Posts

Why "cause area" as the unit of analysis? 2021-01-25T02:53:56.482Z
Funding chains in the x-risk/AI safety ecosystem 2019-09-09T02:56:30.568Z
Donations List Website: tutorial and request for feedback 2018-08-21T04:36:18.183Z
Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission) 2017-01-20T03:57:17.689Z
June 2016 GiveWell board meeting 2016-08-17T15:25:42.662Z
An overview of Y Combinator’s non-profit program 2015-11-07T17:13:27.145Z
Meetup : Discussion Meetup: Volunteering 2015-01-04T22:01:18.847Z
Meetup : Donation Decision Day 2014-11-18T00:12:17.364Z
Meetup : Discussion with Jonathan Courtney from Giving What We Can 2014-10-29T20:16:34.309Z
Meetup : Animal Suffering Documentary Night 2014-10-13T05:18:47.293Z
Meetup : Challenges of Effective Altruism 2014-10-08T20:33:23.917Z

Comments

Comment by riceissa on What are your main reservations about identifying as an effective altruist? · 2021-03-30T19:28:29.957Z · EA · GW

For me, I don't think there is a single dominant reason. Some factors that seem relevant are:

  • Moral uncertainty, both at the object-level and regarding metaethics, which makes me uncertain about how altruistic I should be. Forming a community around "let's all be altruists" seems like an epistemic error to me, even though I am interested in figuring out how to do good in the world.
  • On a personal level, not having any close friends who identify as effective altruists. It feels natural and good to me that a community of people interested in the same things will also tend to develop close personal bonds. The fact that I haven't been able to do this with anyone in the EA community (despite having done so with people outside the community) is an indication that EA isn't "my people".
  • Too few people who I feel truly "get it" or who are actually thinking. I think of most people in the movement as followers or promoters who aren't even doing an especially good job at that.
  • Generic dislike of labels and having identities. This doesn't explain everything though, because I feel less repulsed by some labels (e.g. I feel less upset about calling myself a "rationalist" than about calling myself an "effective altruist").

Comment by riceissa on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-18T21:47:57.866Z · EA · GW

How is Nonlinear currently funded, and how does it plan to get funding for the RFPs?

Comment by riceissa on Running an AMA on the EA Forum · 2021-02-19T23:55:50.332Z · EA · GW

Another idea is to set up conditional AMAs, e.g. "I will commit to doing an AMA if at least n people commit to asking questions." This has the benefit of giving each AMA its own time (without competing for attention with other AMAs) while trying to minimize the chance of time waste and embarrassment.

Comment by riceissa on Why "cause area" as the unit of analysis? · 2021-01-31T20:09:15.700Z · EA · GW

That one is linked from Owen's post.

Comment by riceissa on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T04:43:02.555Z · EA · GW

In the April 2020 payout report, Oliver Habryka wrote:

I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).

I'm curious to hear more about this (either from Oliver or any of the other fund managers).

Comment by riceissa on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T04:27:12.203Z · EA · GW

I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have previously already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so what do they tend to do?

I think LTFF is doing something valuable by giving people the freedom to not "sell out" to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I'm worried about a situation where receiving a grant from LTFF isn't enough to be sustainable, so that people go back to doing more "safe" things like working in academia or at an established org.

Any thoughts on this topic?

Comment by riceissa on Tiny Probabilities of Vast Utilities: Concluding Arguments · 2020-11-18T20:02:22.139Z · EA · GW

Ok I see, thanks for the clarification! I didn't notice the use of the phrase "the MIRI method", which does sound like an odd way to phrase it (if MIRI was in fact not involved in coming up with the model).

Comment by riceissa on Tiny Probabilities of Vast Utilities: Concluding Arguments · 2020-11-17T23:36:50.111Z · EA · GW

MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .

The wording here makes it seem like MIRI/FHI created the model, but the link in the footnote indicates that the model was created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model but it looks like MIRI wasn't involved in creating the model (although the post author seems to have sent it to MIRI before publishing the post). I wonder if I'm missing something though, or misinterpreting what you wrote.

Comment by riceissa on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-10-29T21:55:28.339Z · EA · GW

Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn't seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don't have time to write the full post.

Comment by riceissa on EA considerations regarding increasing political polarization · 2020-06-20T03:34:16.717Z · EA · GW

I think the forum software hides comments from new users by default. You can go here (and click the "play" button) to search for the most recently created users. You can see that Nathan Grant and ssalbdivad have comments on this post that are so far only visible via their user pages, not yet on the post itself.

Edit: The comments mentioned above are now visible on this post.

Comment by riceissa on Existential Risk and Economic Growth · 2020-05-12T05:54:07.144Z · EA · GW

So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.

Comment by riceissa on Existential Risk and Economic Growth · 2020-05-09T00:04:19.009Z · EA · GW

So you think the hazard rate might go from around 20% to around 1%?

I'm not attached to those specific numbers, but I think they are reasonable.

That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

Right, maybe I shouldn't have said "near zero". But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.

Comment by riceissa on Existential Risk and Economic Growth · 2020-05-07T22:47:06.504Z · EA · GW

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

I think the first option (low probability of x-risk with current technology) is driving my intuition.

Just to take some reasonable-seeming numbers (since I don't have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of existential catastrophe from anthropogenic risks within the next 100 years. If growth stopped now, I would take out unaligned AI and unforeseen/other (although "other" includes things like totalitarian regimes so maybe some of the probability mass should be kept), and would also reduce engineered pandemics (not sure by how much), which would bring the chance down to 0.3% to 4%. (Of course, this is a naive analysis since if growth stopped a bunch of other things would change, etc.)
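
A rough reconstruction of this arithmetic, as a sketch only: the individual figures below are approximate values from The Precipice as I recall them (treat them as assumptions and check the book for the exact numbers), and the split into "removed" vs. "kept" risks is my guess at the intended calculation.

```python
# Approximate per-risk estimates (assumed from memory of The Precipice, next 100 years).
risks = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "unforeseen anthropogenic": 1 / 30,
    "other anthropogenic": 1 / 50,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "other environmental damage": 1 / 1000,
}

total = sum(risks.values())
print(f"total anthropogenic (naive sum): {total:.1%}")  # ~19%

# If growth stopped now: drop AI, unforeseen, and other anthropogenic risks.
removed = ["unaligned AI", "unforeseen anthropogenic", "other anthropogenic"]
kept = total - sum(risks[r] for r in removed)

# Engineered pandemics are "reduced by an unclear amount": the low end removes
# them entirely, the high end keeps them in full.
low = kept - risks["engineered pandemics"]
print(f"range: {low:.1%} to {kept:.1%}")  # ~0.3% to ~4%
```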

My intuitions depend a lot on when growth stopped. If growth stopped now I would be less worried, but if it stopped after some dangerous-but-not-growth-promoting technology was invented, I would be more worried.

but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways?

I'm curious what kind of story you have in mind for current narrow AI systems leading to existential catastrophe.

Comment by riceissa on How has biosecurity/pandemic preparedness philanthropy helped with coronavirus, and how might it help with similar future situations? · 2020-04-29T03:10:11.961Z · EA · GW

Dustin Moskovitz has a relevant thread on Twitter: https://twitter.com/moskov/status/1254922931668279296

Comment by riceissa on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T03:29:27.799Z · EA · GW

The timing of this AMA is pretty awkward, since many people will presumably not have access to the book yet or will not have finished reading it. For comparison, Stuart Russell's new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?

Comment by riceissa on What are the key ongoing debates in EA? · 2020-03-12T07:34:49.267Z · EA · GW

I don't think you can add the percentages for "top or near top priority" and "at least significant resources". If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double-counting some people.

Looking at the bar graph above the table, it looks like "at least significant resources" includes everyone in "significant resources", "near-top priority", and "top priority". For mental health it looks like "significant resources" has 37%, and "near-top priority" and "top priority" combined have 21.5% (shown as 22% in the bar graph).

So your actual calculation would just be 0.585 × 0.25 (where 0.585 = 0.37 + 0.215), which is about 15%.
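
A minimal arithmetic sketch of this correction (the 0.25 multiplier comes from the parent comment's calculation and is just assumed here):

```python
# Mental health: share of respondents at each level (figures quoted above from the bar graph).
significant_only = 0.37       # "significant resources"
near_top_or_top = 0.215       # "near-top priority" + "top priority"

at_least_significant = significant_only + near_top_or_top    # 0.585

parent_multiplier = 0.25      # taken from the parent comment, not re-derived here
print(at_least_significant * parent_multiplier)               # ~0.146, i.e. about 15%
```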

Comment by riceissa on COVID-19 brief for friends and family · 2020-02-28T23:59:37.418Z · EA · GW

Stocking ~1 month of nonperishable food and other necessities

Can you say more about why 1 month, instead of 2 weeks or 3 months or some other length of time?

Also can you say something about how to decide when to start eating from stored food, instead of going out to buy new food or ordering food online?

Comment by riceissa on How do you feel about the main EA facebook group? · 2020-02-16T02:30:53.731Z · EA · GW

I think that's one of the common ways for a post to be interesting, but there are other ways (e.g. asking a question that generates interesting discussion in the comments).

Comment by riceissa on How do you feel about the main EA facebook group? · 2020-02-13T02:55:31.257Z · EA · GW

This has been the case for quite a while now. There was a small discussion back in December 2016 where some people expressed similar opinions. My guess is that 2015 is the last year the group regularly had interesting posts, but I might be remembering incorrectly.

Comment by riceissa on We're Rethink Priorities. AMA. · 2019-12-12T21:44:30.739Z · EA · GW

How did you decide on "blog posts, cross-posted to EA Forum" as the main output format for your organization? How deliberate was this choice, and what were the reasons going into it? There are many other output formats that could have been chosen instead (e.g. papers, wiki pages, interactive/tool website, blog+standalone web pages, online book, timelines).

Comment by riceissa on Should we use wiki to improve knowledge management within the community? · 2019-12-09T22:28:02.029Z · EA · GW

wikieahuborg_w-20180412-history.xml contains the dump, which can be imported into a MediaWiki instance.

Comment by riceissa on Should we use wiki to improve knowledge management within the community? · 2019-12-09T09:49:46.386Z · EA · GW

Re: The old wiki on the EA Hub, I'm afraid the old wiki data got corrupted, it wasn't backed up properly and it was deemed too difficult to restore at the time :(. So it looks like the information in that wiki is now lost to the winds.

I think a dump of the wiki is available at https://archive.org/details/wiki-wikieahuborg_w.

Comment by riceissa on What is the size of the EA community? · 2019-11-20T00:07:32.241Z · EA · GW

The full metrics report gives a breakdown of the number of donors by donation size and year (for 2016–2018), both as an estimate and as a count of known donors.

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T08:39:20.483Z · EA · GW

Do you have any thoughts on Qualia Research Institute?

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T08:20:47.943Z · EA · GW

Over the years, you have published several pieces on ways you've changed your mind (e.g. about EA, another about EA, weird ideas, hedonic utilitarianism, and a bunch of other ideas). While I've enjoyed reading the posts and the selection of ideas, I've also found most of the posts frustrating (the hedonic utilitarianism one is an exception) because they mostly only give the direction of the update, without also giving the reasoning and additional evidence that caused the update* (e.g. in the EA post you write "I am erring on the side of writing this faster and including more of my conclusions, at the cost of not very clearly explaining why I’ve shifted positions"). Is there a reason you keep writing in this style (e.g. you don't have time, or you don't want to "give away the answers" to the reader), and if so, what is the reason?

*Why do I find this frustrating? My basic reasoning is something like this: I think this style of writing forces the reader to do a weird kind of Aumann reasoning where they have to guess what evidence/arguments Buck might have had at the start, and what evidence/arguments he subsequently saw, in order to try to reconstruct the update. When I encounter this kind of writing, I mostly just take it as social information about who believes what, without bothering to go through the Aumann reasoning (because it seems impossible or would take way too much effort). See also this comment by Wei Dai.

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:46:17.731Z · EA · GW

Do you think non-altruistic interventions for AI alignment (i.e. AI safety "prepping") make sense? If so, do you have suggestions for concrete actions to take, and if not, why do you think they don't make sense?

(Note: I previously asked a similar question addressed at someone else, but I am curious for Buck's thoughts on this.)

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:42:56.126Z · EA · GW

How do you see success/an "existential win" playing out in short timeline scenarios (e.g. less than 10 years until AGI) where alignment is non-trivial/turns out to not solve itself "by default"? For example, in these scenarios do you see MIRI building an AGI, or assisting/advising another group to do so, or something else?

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:35:45.777Z · EA · GW

[Meta] During the AMA, are you planning to distinguish (e.g. by giving short replies) between the case where you can't answer a question due to MIRI's non-disclosure policy vs the case where you won't answer a question simply because there isn't enough time/it's too much effort to answer?

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:29:57.564Z · EA · GW

The 2017 MIRI fundraiser post says "We plan to say more in the future about the criteria for strategically adequate projects in 7a" and also "A number of the points above require further explanation and motivation, and we’ll be providing more details on our view of the strategic landscape in the near future". As far as I can tell, MIRI hasn't published any further explanation of this strategic plan. Is MIRI still planning to say more about its strategic plan in the near future, and if so, is there a concrete timeframe (e.g. "in a few months", "in a year", "in two years") for publishing such an explanation?

(Note: I asked this question a while ago on LessWrong.)

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:24:49.548Z · EA · GW

I asked a question on LessWrong recently that I'm curious for your thoughts on. If you don't want to read the full text on LessWrong, the short version is: Do you think it has become harder recently (say 2013 vs 2019) to become a mathematician at MIRI? Why or why not?

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:18:48.010Z · EA · GW

In November 2018 you said "we want to hire as many people as engineers as possible; this would be dozens if we could, but it's hard to hire, so we'll more likely end up hiring more like ten over the next year". As far as I can tell, MIRI has hired 2 engineers (Edward Kmett and James Payor) since you wrote that comment. Can you comment on the discrepancy? Did hiring turn out to be much more difficult than expected? Are there not enough good engineers looking to be hired? Are there a bunch of engineers who aren't on the team page/haven't been announced yet?

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:11:38.417Z · EA · GW

On the SSC roadtrip post, you say "After our trip, I'll write up a post-mortem for other people who might be interested in doing things like this in the future". Are you still planning to write this, and if so, when do you expect to publish it?

Comment by riceissa on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T07:09:15.825Z · EA · GW

Back in July, you held an in-person Q&A at REACH and said "There are a bunch of things about AI alignment which I think are pretty important but which aren’t written up online very well. One thing I hope to do at this Q&A is try saying these things to people and see whether people think they make sense." Could you say more about what these important things are, and what was discussed at the Q&A?

Comment by riceissa on Existential Risk and Economic Growth · 2019-11-03T08:59:03.583Z · EA · GW

I read the paper (skipping almost all the math) and Philip Trammell's blog post. I'm not sure I understood the paper, and in any case I'm pretty confused about the topic of how growth influences x-risk, so I want to ask you a bunch of questions:

  1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

  2. What do you think of Wei Dai's argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

  3. What do you think of Eliezer Yudkowsky's argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

  4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

  5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us to the point where people are rich and value safety sooner. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

  6. I am confused about why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only compresses the curve along the time axis, but also stretches it along the vertical axis (so that the area under the curve is mostly unaffected).

    As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the hazard rate graph would stay constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop to near zero and then stay constant, because no new dangerous technologies are being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we would still experience a lot of extinction risk. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase? (A rough numerical sketch of this intuition appears after this list.)

    This reminds me of the question of whether it is better to walk or run in the rain (keeping distance traveled constant). We can imagine a modification where the raindrops are motionless in the air.
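
To make this intuition concrete, here is a minimal numerical sketch (not the paper's actual model: the hazard-rate shape and all numbers are made up for illustration). It compares a baseline hazard curve, a purely time-compressed version, and a version compressed in time but stretched vertically so that the area under the curve is preserved.

```python
import numpy as np

def p_survive(hazard, dt):
    # P(no extinction over the horizon) = exp(-integral of the hazard rate)
    return np.exp(-np.sum(hazard) * dt)

dt = 1.0
t = np.arange(0, 1000, dt)

# Made-up baseline hazard: rises as dangerous technologies arrive, falls as safety catches up.
baseline = 0.002 * np.exp(-((t - 400) / 150) ** 2)

# "Accelerated growth" as pure time compression: same curve traversed in half the time,
# height unchanged, so the integrated hazard is roughly halved.
compressed = baseline[::2]

# Compression plus a vertical stretch that preserves the area under the curve.
compressed_stretched = 2 * baseline[::2]

for name, h in [("baseline", baseline),
                ("compressed only", compressed),
                ("compressed + stretched", compressed_stretched)]:
    print(f"{name:>22}: P(survive) = {p_survive(h, dt):.3f}")
```

Under pure compression the cumulative risk falls, which matches how I read Figure 7; with the vertical stretch it is unchanged, which matches my intuition.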

Comment by riceissa on EA Hotel Fundraiser 5: Out of runway! · 2019-10-29T23:22:34.612Z · EA · GW

Can you give some examples of EA organizations that have done things the "right way" (in your view)?

Comment by riceissa on How worried should I be about a childless Disneyland? · 2019-10-28T22:17:44.601Z · EA · GW

Several background variables give rise to worldviews/outlooks about how to make the transition to a world with AGIs go well. Answering this question requires assigning values to the background variables or placing weights on the various worldviews, and then thinking about how likely "Disneyland with no children" scenarios are under each worldview, by e.g. looking at how they solve philosophical problems (particularly deliberation) and how likely obvious vs non-obvious failures are.

That is to say, I think answering questions like this is pretty difficult, and I don't think there are any deep public analyses about it. I expect most EAs who don't specialize in AI alignment to do something on the order of "under MIRI's views the main difficulty is getting any sort of alignment, so this kind of failure mode isn't the main concern, at least until we've solved alignment; under Paul's views we will sort of have control over AI systems, at least in the beginning, so this kind of failure seems like one of the many things to be worried about; overall I'm not sure how much weight I place on each view, and don't know what to think so I'll just wait for the AI alignment field to produce more insights".

Comment by riceissa on EA Hotel Fundraiser 5: Out of runway! · 2019-10-27T20:58:39.843Z · EA · GW

The inconsistency is itself a little concerning.

I am one of the contributors to the Donations List Website (DLW), the site you link to. DLW is not affiliated with the EA Hotel in any way (although Vipul, the maintainer of DLW, made a donation to the EA Hotel). Some reasons for the discrepancy in this case:

  • As stated in bold letters at the top of the page, "Current data is preliminary and has not been completely vetted and normalized". I don't think this is the main reason in this case.
  • Pulling data into DLW is not automatic, so there is a lag between when the donations are made and when they appear on DLW.
  • DLW only tracks public donations.

Comment by riceissa on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-10T21:44:14.502Z · EA · GW

The reason may be somewhat simple: most AI alignment researchers do not participate (post or comment) on LW/AF or participate only a little.

I'm wondering how many such people there are. Specifically, how many people (i) don't participate on LW/AF, (ii) don't already get paid for AI alignment work, and (iii) do seriously want to spend a significant amount of time working on AI alignment or already do so in their free time? (So I want to exclude researchers at organizations, random people who contact 80,000 Hours for advice on how to get involved, people who attend a MIRI workshop or AI safety camp but then happily go back to doing non-alignment work, etc.) My own feeling before reading your comment was that there are maybe 10-20 such people, but it sounds like there may be many more than that. Do you have a specific number in mind?

if you follow just LW, your understanding of the field of AI safety is likely somewhat distorted

I'm aware of this, and I've seen Wei Dai's post and the comments there. Personally I don't see an easy way to get access to more private discussions due to a variety of factors (not being invited to workshops, some workshops being too expensive for it to be worth traveling to, not being eligible to apply for certain programs, and so on).

Comment by riceissa on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-10T05:24:24.585Z · EA · GW

A trend I've noticed in the AI safety independent research grants for the past two rounds (April and August) is that most of the grantees have little to no online presence as far as I know (they could be using pseudonyms I am unaware of); I believe Alex Turner and David Manheim are the only exceptions. However, when I think about "who am I most excited to give individual research grants to, if I had that kind of money?", the names I come up with are people who leave interesting comments and posts on LessWrong about AI safety. (This isn't surprising because I mostly interact with the AI safety community publicly online, so I don't have much access to private info.) To give an idea of the kind of people I am thinking of, I would name John Wentworth, Steve Byrnes, Ofer G., Morgan Sinclaire, and Evan Hubinger as examples.

This has me wondering what's going on. Some possibilities I can think of:

  1. the people who contribute on LW aren't applying for grants
  2. the private people are higher quality than the online people
  3. the private people have more credentials than the online people (e.g. Hertz Fellowship, math contests experience)
  4. fund managers are more receptive offline than online and it's easier to network offline
  5. fund managers don't follow online discussions closely

I would appreciate if the fund managers could weigh in on this so I have a better sense of why my own thinking seems to diverge so much from the actual grant recommendations.

Comment by riceissa on What should Founders Pledge research? · 2019-09-11T02:13:51.022Z · EA · GW

various people's pressure on OpenPhil to fund MIRI

I'm curious what this is referring to. Are there specific instances of such pressure being applied on Open Phil that you could point to?

Comment by riceissa on Funding chains in the x-risk/AI safety ecosystem · 2019-09-09T22:20:30.427Z · EA · GW

this graph is also fairly misleading by putting OpenPhil on the same footing as an individual ETG-funder, although OpenPhil is disbursing wholly 1000x more funds

See my reply to Ozzie.

Also, do you think by moving the nodes around you could reduce the extent to which lines cross over each other, to increase clarity?

I added three additional graphs that use different layout algorithms here. I don't know if they're any better.
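
In case anyone wants to experiment with other layouts, here is a minimal sketch of how one might switch Graphviz layout engines (this assumes the Python graphviz package, and the two edges are just placeholders rather than the actual funding data):

```python
from graphviz import Digraph

# Placeholder edges standing in for the real donor -> donee list.
edges = [("Donor A", "Org X"), ("Donor B", "Org X")]

for engine in ["dot", "neato", "fdp", "circo"]:
    g = Digraph(engine=engine)
    for donor, donee in edges:
        g.edge(donor, donee)
    g.render(f"funding-graph-{engine}", format="png", cleanup=True)
```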

Comment by riceissa on Funding chains in the x-risk/AI safety ecosystem · 2019-09-09T21:57:17.057Z · EA · GW

I suggest adding labels to the edges to state a rough number of funding

I find that remembering the typical grant/donation size of a donor is easier than remembering all the connections between different donors and donees, so having the edges visually represented (without further decorating the edges) captures most of the value of the exercise. I realize that others who don't follow the EA granting space as closely as I do may feel differently.

Perhaps it would ideally be an interactive application

I don't have experience making such applications, so I will let someone else do this.

was there any reason for having Patrick in particular on the top of this?

The node positions were chosen by Graphviz, so I didn't choose to put Patrick on top. I included Patrick because Vipul suggested doing this (I would guess because Patrick was the most available example of an ETG donor who has given to many x-risk charities).

Comment by riceissa on Why were people skeptical about RAISE? · 2019-09-04T20:45:48.812Z · EA · GW

I'm not sure I understand the difference between mathematical thinking and mathematical knowledge. Could you briefly explain or give a reference? (e.g. I am wondering what it would look like if someone had a lot of one and very little of the other)

Comment by riceissa on Cause X Guide · 2019-09-01T21:15:46.922Z · EA · GW

It seems to me that this post has introduced a new definition of cause X that is weaker (i.e. easier to satisfy) than the one used by CEA.

This post defines cause X as:

The concept behind a “cause X” is that there could be a cause neglected by the EA community but that is as important, or more important, to work on than the four currently established EA cause areas.

But from Will MacAskill's talk:

What are the sorts of major moral problems that in several hundred years we'll look back and think, "Wow, we were barbarians!"? What are the major issues that we haven't even conceptualized today?

I will refer to this as Cause X.

See also the first paragraph of Emanuele Ascani's answer here.

From the "New causes one could consider" list in this post, I think only Invertebrates and Moral circle expansion would qualify as a potential cause X under CEA's definition (the others already have researchers/organizations working on them full-time, or wouldn't sound crazy to the average person).

I think it would be good to have a separate term specifically for the cause areas that seem especially crazy or unconceptualized, since searching for causes in this stricter class likely requires different strategies, more open-mindedness, etc.

Related: Guarded definition.

Comment by riceissa on Long-Term Future Fund: April 2019 grant recommendations · 2019-08-25T21:04:14.218Z · EA · GW

Hi Oliver, are you still planning to reply to this? (I'm not involved with this project, but I was curious to hear your feedback on it.)

Comment by riceissa on I find this forum increasingly difficult to navigate · 2019-07-05T18:18:12.518Z · EA · GW

filtering by highest rating over several different time ranges

The EA Forum Reader I made a while ago has the ability to do this. The top view shows posts in order of score, and one can filter by various date ranges ("Restrict date range: Today · This week · This month · Last three months · This year · All time" exactly like on the old forum). In addition, the "Archive" links (in the sidebar on desktop, or at the bottom of the page on mobile) in the top view show the top posts from the given time period, so e.g. one can view the top posts in 2018 or the top posts in February 2019.

Comment by riceissa on Needed EA-related Articles on the English Wikipedia · 2019-06-29T03:38:48.135Z · EA · GW

Open Phil used to have its own page; see e.g. this version and the revision history for some context. (Disclosure: I wrote the original version of the page.)

Comment by riceissa on Needed EA-related Articles on the English Wikipedia · 2019-06-27T20:47:54.977Z · EA · GW

Animal Charity Evaluators previously had a page on Wikipedia, but it was deleted after discussion. You can see a copy of what the page looked like, which could also serve as a starting point in case someone wants to recreate the page.

My guess (based on intuition/experience and without spending any time digging up sources) is that almost all of these do not meet Wikipedia's general notability guideline, so it is uncertain whether they would survive for long (if someone were to write the pages). In other words, they might be deleted like the ACE article.

The Chivers book will likely meet the notability criteria for books (if it doesn't already).

Comment by riceissa on EA Forum Prize: Winners for March 2019 · 2019-05-07T13:12:49.708Z · EA · GW

Can you clarify which timezone is being used to determine whether a post is published in one month vs another? (A post I am curious about was published in March in some timezones but in April in others, so I'm wondering if it was even considered for the March prize.)

Comment by riceissa on Legal psychedelic retreats launching in Jamaica · 2019-04-18T22:57:28.156Z · EA · GW

I don't see this as a risk for EA/rationalist types though, and would argue that pretty strongly.

Would you be willing to supply this argument? I am very curious to hear more about your thinking on this, as it is something I have wondered about. (For the sake of transparency, I should mention that my own take is that there is a significant risk even for EAs and rationalists to be overtaken by unscientific thinking after strong psychedelic experiences, and that it takes unusually solid worldviews and/or some sort of personality trait that is hard-to-describe in order to resist this influence.)