Posts

Is anyone in EA currently looking at short-term famine alleviation as a possible high-impact opportunity this year? 2021-08-06T22:14:07.199Z
COVID-19 Assessment Tool by the Human Diagnosis Project 2020-04-02T07:27:31.518Z

Comments

Comment by Ben_Harack on [Cause Exploration Prizes] Fix the Money, Fix the world · 2022-09-07T13:47:31.588Z · EA · GW

My impression is that cryptocurrencies face some major challenges in achieving some of the basic functions of money. For example, check out Bitcoin – The Promise and Limits of Private Innovation in Monetary and Payment Systems by Beer and Weber. They argue (persuasively, in my opinion) that traditional currencies serve three functions that cryptocurrencies face significant challenges in replicating:

  1. unit of account
  2. means of payment
  3. store of value

Based on the Beer and Weber paper, here's my brief expansion of these problems as they relate to Bitcoin: 

  • Having prices in one currency is more efficient than having them in many, so a single currency (e.g., a national currency) beats multiple currencies and facilitates trade. Bitcoin in particular struggles to achieve this because a) it introduces a separate unit of account, b) it has no single, identifiable issuer, and c) its quantity is fixed. 
  • If prices in the cryptocurrency change dramatically and unpredictably compared to standard currencies, then measuring prices in the cryptocurrency doesn't make sense. It's hard to imagine this asymmetry being overcome entirely unless a single cryptocurrency became so widespread and dominant that it achieved more stability than reserve currencies. 
  • Since there is no institution committed to stabilizing the currency's value, it cannot be relied upon as a store of value. Bitcoin in particular is a speculative asset.

See also the Impossible Trinity in international political economy to get a sense of yet another problem that cryptocurrencies do not yet have a solution for. 

I have not yet seen solutions to these problems and there appear to be good reasons to believe that a system similar to Bitcoin cannot solve them. Future cryptocurrencies may put a serious dent in these problems, but I remain skeptical of their near-term (within a few decades) potential to "fix money".

Comment by Ben_Harack on What are effective ways to help Ukrainians right now? · 2022-06-17T09:55:19.564Z · EA · GW

A few months later, I want to note that my impression is that the Red Cross is indeed quite ineffective in this regard (helping Ukraine in the war). Other options are better. I came to this conclusion soon after writing the above comment, but I didn't come back here (till now) to correct myself. I still think that the original comment in this thread was made in good faith, and thus I wouldn't downvote it. I did, however, want to make clear that my thoughts had evolved significantly after writing the above comment.

Comment by Ben_Harack on The longtermist AI governance landscape: a basic overview · 2022-06-17T09:28:56.674Z · EA · GW

Convergence also does a lot of work on the strategic level.

Comment by Ben_Harack on What are effective ways to help Ukrainians right now? · 2022-03-11T19:54:19.721Z · EA · GW

This is a legit suggestion, so I'm going to strongly upvote the comment. Not sure why the downvotes are coming in, other than, as you say, perhaps indicating that people think that the Red Cross is ineffective, or that Canadian-specific multipliers aren't highly relevant for this discussion.

Comment by Ben_Harack on As an independent researcher, how do you stay or become motivated, productive, and impactful? · 2022-03-11T19:16:27.879Z · EA · GW

Most of these are pithy statements that serve as reminders of much more complicated and nuanced ideas. This is a mix of recitation types, only some of which are explicitly related to motivation. I've summarized, rephrased, and expanded most of these for clarity, and cut entire sections that are too esoteric. Also, something I'd love to try, but haven't, is putting some of these into a spaced repetition practice (I use Anki), since I've heard surprisingly positive things about how well that works.

  1. Be ruthlessly efficient today
  2. <Specific reminder about a habit that I'm seeking to break>
  3. Brainstorm, then execute
  4. If you don't have a plan for it, it isn't going to happen.
  5. A long list of things that you want to do is no excuse for not doing any of them.
  6. Make an extraordinary effort.
  7. <Reminders about particular physical/emotional needs that are not adequately covered by existing habits>
  8. Remember the spheres of control: Total control. Some control. No control. For more info, see here: https://www.precisionnutrition.com/wp-content/uploads/2019/09/Sphere-of-control-FF.pdf
  9. Every problem is an opportunity
  10. What you do today is important because you are exchanging a day of your life for it. (might be from Heartsill Wilson)
  11. Think about what isn't being said, but needs to be.
  12. Get results
  13. Life is finite; pursue your cares.
  14. The opposite of play is not work. The opposite of play is depression. (paraphrased from Brian Sutton-Smith)
  15. Move gently
  16. The weighted version of the "shortest processing time" scheduling algorithm is close to optimal on all metrics. (from "Algorithms to Live By")
  17. Exponential backoff for relationships: finite investment, infinite patience. (from "Algorithms to Live By")
  18. Doing things right vs doing the right thing.
  19. 10-10-10. (Reference to the technique of thinking about how a decision would be viewed 10 minutes, 10 months, and 10 years in the future. Modify at your discretion.)
  20. Bookending (think of extreme cases of what you are trying to predict)
  21. Triage - nowadays I'd tell people to go read Holly Elmore's writeup on this, with an emphasis on "We are always in triage. I fervently hope that one day we will be able to save everyone. In the meantime, it is irresponsible to pretend that we aren’t making life and death decisions with the allocation of our resources. Pretending there is no choice only makes our decisions worse."
  22. <several reminders about how I want to act in my relationships>
  23. Change expectations and you change people, including yourself.
  24. What you see is all there is. (Fallacy described by Daniel Kahneman)
  25. The bait and switch - replacing a hard question with an easy one. (Fallacy described by Daniel Kahneman)
  26. Reverse the phrasing of questions and statements (a standard technique for testing the credibility / reasonableness / usefulness of statements or questions)
  27. Destroy your fear of criticism.
  28. Constructive critiques are precious.
  29. <Various personal techniques for de-stressing>
  30. Use counterfactuals.
  31. Mental parliament. (Also see related Moral Parliament or "personal board of directors" ideas.)
  32. Try hard for five minutes. (Reference to Yudkowsky's techniques of this sort.)
  33. The 80-20 rule. Focus on doing 80% of the good.
  34. Deep work.
  35. Every moment is practice. What are you practicing?
  36. Murphyjitsu.
  37. Remember that it is the "experiencing self" who has to execute any plan you make.
  38. Do your work in a way that allows other people to follow you.
  39. Clairvoyance test (Another Tetlock idea: if you passed the question to someone who could see the future, they could give you the answer without having to come back for a re-specification of what the question actually is.)
  40. When you reach the end of what you can comprehend, you probably haven't found nature's limits, but your own.
  41. Wittgenstein's ruler (Unless you have confidence in the ruler's reliability, if you use a ruler to measure a table you may also be using the table to measure the ruler.)
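Item 16 above is concrete enough to sketch. Here's a minimal illustration of the weighted shortest-processing-time rule (order tasks by importance-to-duration ratio, descending); the task names and numbers are made up for illustration:

```python
# Weighted "shortest processing time": sort tasks by weight/duration ratio,
# descending. (Known as Smith's rule; it minimizes the sum of weighted
# completion times.)

def weighted_spt(tasks):
    """tasks: list of (name, duration, weight) tuples.
    Returns task names in the order they should be done."""
    return [name for name, duration, weight in
            sorted(tasks, key=lambda t: t[2] / t[1], reverse=True)]

tasks = [
    ("write report", 4.0, 2.0),    # ratio 0.5
    ("reply to email", 0.5, 1.0),  # ratio 2.0
    ("review paper", 2.0, 3.0),    # ratio 1.5
]
print(weighted_spt(tasks))  # ['reply to email', 'review paper', 'write report']
```

Note that the short-but-unimportant email still jumps the queue over the long, heavily weighted report, which matches the intuition behind the recitation.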
     
Comment by Ben_Harack on As an independent researcher, what are the biggest bottlenecks (if any) to your motivation, productivity, or impact? · 2022-03-10T20:54:39.150Z · EA · GW

For me what leaps to mind is all of the in-between stuff, like proofreading, LaTeX issues, graphics, plots, etc. Of course, I've also tried to hire help on some of these fronts with very mixed results (generally negative). So I guess I'd say that fundamentally, independent work can really suffer from its independence (not having the various supports and connections that would make it better). Building relationships and collaborations that alleviate these problems is part of being an effective independent researcher.

Comment by Ben_Harack on As an independent researcher, how do you stay or become motivated, productive, and impactful? · 2022-03-10T20:46:52.360Z · EA · GW

Prioritize ruthlessly. Very few ideas can even be examined, let alone pursued.

Comment by Ben_Harack on As an independent researcher, how do you stay or become motivated, productive, and impactful? · 2022-03-10T20:43:34.591Z · EA · GW

Productivity + meta: Learn to be an effective Red Team, and use this ability on your own ideas and plans. 

Comment by Ben_Harack on As an independent researcher, how do you stay or become motivated, productive, and impactful? · 2022-03-10T20:42:26.348Z · EA · GW

Motivation: Find a way to remind yourself about what you care about (and if needed, why you care about it). This could manifest in any way that works for you: a post-it, a calendar notification, a standing meeting with colleagues where you do a moment of reflection (a technique that I've seen used to great effect at the Human Diagnosis Project), or a list of recitations embedded among TODO list items (my personal technique). 

Comment by Ben_Harack on As an independent researcher, how do you stay or become motivated, productive, and impactful? · 2022-03-10T20:32:03.368Z · EA · GW

Allocate some time to "meta", like studying habit formation and self-management. For starters I might recommend Atomic Habits and some of Cal Newport's work.

Comment by Ben_Harack on The Future Fund’s Project Ideas Competition · 2022-03-10T20:27:04.068Z · EA · GW

Sad that I missed this! Only saw this the day after it closed.

Comment by Ben_Harack on Towards an EA Governance? · 2022-03-09T15:33:33.786Z · EA · GW

I agree that there's a lot to like about this vision. Some of my own work aims in this direction (see Ruling Ourselves if you're interested). Tractability is a major concern, however. Major changes like these may very well be possible, but it's very difficult to demonstrate (a huge burden of proof) that particular actions can create a world like this. To develop these ideas further, I suggest taking the part of this vision that excites you most (perhaps the part that seems more important and tractable than the rest) and really digging deep for a while. It is really useful to understand why a particular governance system has ended up in the equilibrium that it's in. This kind of insight can enable effective work.

Comment by Ben_Harack on Is there an umbrella organization for small EA-ish nonprofits in the US? · 2022-03-09T00:57:08.023Z · EA · GW

(I'm not a lawyer. I'm commenting based on some experience doing similar things in the U.S.)

It depends on what area you are working on. There are a variety of orgs whose mandates span large parts of EA-space. If you know what area you're working on, I suggest focusing on orgs that are closely related to that area. I think that nonprofits have to be able to show how their activities relate to their declared mandate/mission.

Comment by Ben_Harack on What are some examples of EA <-> Academia collaborations? · 2022-02-19T15:28:36.728Z · EA · GW

Just off the top of my head, take a look at the things done by BERI, SERI, other "existential risk initiative" projects, FLI, Effective Thesis, GovAI, ALLFED, and various projects of CSER and OpenPhil that support universities (e.g., the Forethought Foundation, 80,000 Hours). This list is very incomplete, but it gestures in the direction of the kinds of things that I see as EA-Academia collaborations or cross-pollinations. 

Comment by Ben_Harack on Introducing High-Impact Medicine (Hi-Med) · 2021-12-30T20:02:31.489Z · EA · GW

People interested in High Impact Medicine may also be interested in the Human Diagnosis Project (see http://humandx.org and the "Human Dx" app on the main app stores).  The Project intends to solve the problem of medical diagnosis for all of humanity. Currently it allows physicians to train their skills and collaborate on answering thorny medical questions. Eventually it will hopefully provide significant diagnostic help (via both collaboration and decision support) for both medical workers everywhere as well as the broader public. The Project would benefit greatly from additional engaged physicians who are interested in helping people.

Personal context: I work on the Engineering team at the Human Diagnosis Project.

Comment by Ben_Harack on [deleted post] 2021-08-21T02:23:55.201Z

I looked into this a bit during 2014-2017. At the time I thought it was plausible that mechanisms similar to state failure (including even significant underdevelopment such that effective policing never becomes possible) might be the source of a noteworthy amount of existential risk. I mentioned this in passing in Ruling Ourselves.

Bostrom's "Vulnerable World Hypothesis" also contains some ideas that point in this direction.

Since then I've updated pretty strongly in the direction of focusing on advanced nations and great powers. As far as I can tell, it will be these nations that shape the development and use of every transformative technology that has shown up on my radar. Thus, I now focus heavily on great powers. 

State collapse is probably fairly heavily studied in the realm of nuclear security (think post-Soviet countries, Pakistan, and North Korea for starters), which for traditional IR is about as close as one gets to existential risk.

Comment by Ben_Harack on Impact Certificates on a Blockchain · 2021-08-11T13:41:14.163Z · EA · GW

Interesting, thanks for the reply! Let me unpack what I'm thinking of when I say "if such a system existed". Here are some things I'm imagining in such a scenario:

Ideally, there is a market already (not just the potential for one, as that link indicates), or there is a clear plan and a number of EAs I can name who have said that they will participate. I'm willing to be an early adopter, but I'm not in a position where I can vet the fundamentals of the project. For example, I'd like to see people who were involved in the prior attempts to do Certificates of Impact endorsing a plan. Similarly, I'd like to see analysis from a different and identifiable person who is an expert with crypto. I'm just conversant in crypto, and I find most of the writing here to be somewhat inaccessible due to its length and complexity.

The above is currently my main set of cruxes, but here are a few expanded thoughts on things I'd like to see:

  • A precise plan (not just a discussion of tradeoffs and technical possibilities). 
    • Example: A blog post (or several) detailing exactly how the Certificates of Impact system works, how each type of individual can interact with it in all the expected ways, and what its constraints are. After reading such documentation, someone with close to zero crypto knowledge should be able to participate.
    • Nice to have: A community-vetted website or portal hosted on a reliable domain that simplifies the interactions in the market so that all unneeded complexity and terminology is hidden.
  • Public scrutiny of the system by identifiable crypto people in the community. At my level of knowledge, the only way that I can feasibly be relatively certain that the system would likely be sane is that I've seen it publicly scrutinized by people who are extremely skilled in this sort of thing.

I realize that what I'm asking for is costly. From my perspective, though, these requirements seem pretty fundamental to actually kickstarting a vibrant impact certificate market. 

On the flip side, I think there's a lot of potential for such a system, so I'd see this work as quite plausibly very high impact and thus hopefully a mini-cause around which folks can coordinate. Personally, I can try to rally support once a system exists (see above), but I'm not currently in a position to rally community leaders nor get crypto experts to scrutinize the plan.

Comment by Ben_Harack on What posts do you want someone to write? · 2021-08-08T00:49:44.330Z · EA · GW

Credible qualitative and/or quantitative evidence on the effectiveness of habits, tools, and techniques for knowledge work.

Comment by Ben_Harack on What things did you do to gain experience related to EA? · 2021-08-08T00:12:29.305Z · EA · GW

I pursued related research prior to learning about EA, attended EA Global a few times, joined a startup that is EA-aligned (the Human Diagnosis Project), conducted more research on the side, and provided both mentorship and collaboration for other researchers.

Comment by Ben_Harack on Impact Certificates on a Blockchain · 2021-08-08T00:07:28.563Z · EA · GW

I'll try to directly answer some of the questions raised.

I'm generally interested in this project. If such a system existed, I'd probably issue certificates for research artifacts (papers, blog posts, software, datasets, etc.) and would advocate for the usage of impact certificates more broadly. 

If I were able to reliably buy arbitrary fractions of certificates on an open market, I'd probably do so somewhat often (every several weeks) in order to send signals of value. My personal expenditures would be very small (probably a few hundred dollars per year unless something significantly changes), but I'd also try to influence others to get involved similarly. 

As for concerns, I'm very uncertain about my position on the diverging concerns raised and argued by RyanCarey and gwern in this thread. As a creator, I can imagine wanting access to the entirety (or at least the majority) of the value of certificates attached to my work. As an observer of a market, I'd like for it to generally be open for speculation and revaluation, etc. Perhaps I'd be in favor of a system that splits the difference somehow, perhaps via smart contracts that enforce a split of resale royalties (most going to the creator, some going to the prior owner)? 
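To make the "split the difference" idea concrete, here's a toy sketch of how such a resale settlement could work: on each resale, most of the price flows to the original creator and a smaller cut to the prior owner. The 70/30 split and the integer-cent accounting are illustrative assumptions of mine, not a proposal from this thread:

```python
# Toy model of a resale-royalty split that a smart contract could enforce:
# the creator keeps most of each resale price, the prior owner gets the rest.
# Prices are in integer cents to avoid floating-point rounding issues.

def settle_resale(price_cents, creator_bps=7000):
    """Split a resale price between creator and prior owner.

    creator_bps: creator's share in basis points (7000 = 70%).
    Returns (creator_payout, prior_owner_payout)."""
    creator_cut = price_cents * creator_bps // 10_000
    return creator_cut, price_cents - creator_cut

print(settle_resale(10_000))  # (7000, 3000): creator 70%, prior owner 30%
```

The exact ratio would be a market-design choice; the point is only that a programmable split between creator and prior owner is mechanically simple.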

Relatedly, I'd love to see a workable / understandable / intuitive system for revaluation of a certificate as various parties end up owning various parts of it, bought at differing prices (if such a thing is possible). I can imagine myself wanting to send a signal that a cert should be valued more highly by buying a small fraction of it for higher than the going rate. I may also just be unfamiliar with existing pricing schemes for fractional ownership like this.

Comment by Ben_Harack on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-06T22:27:19.992Z · EA · GW

The Human Diagnosis Project (disclaimer: I currently work there). If successful, it will be a major step toward accurate medical diagnosis for all of humanity.

Comment by Ben_Harack on How have you become more (or less) engaged with EA in the last year? · 2021-08-04T15:51:36.793Z · EA · GW

I'm late to the party on this reply, but I'll try to reply as if I'm doing so in late 2020.

Yes, I'm more engaged than I was in 2019, and that's saying something considering that I was pretty engaged in 2019: working at an EA-aligned org (the Human Diagnosis Project), participating in EAG, joining Modeling Cooperation, building other collaborations, writing blog posts, etc.

What changed?
1. The Human Diagnosis Project continues to make headway toward the possibility of (very) significant impact and my role there increased substantially in responsibility.

2. During 2020 I systematically pursued knowledge relating to some of my key interests (e.g., International Relations and game theory) and this exposure seems to have opened a lot of conceptual doors for me. This substantially increased my belief that I can make significant contributions to EA and thus increased my motivation.

Comment by Ben_Harack on COVID-19 Assessment Tool by the Human Diagnosis Project · 2020-04-25T18:58:30.499Z · EA · GW

An update here: This COVID-19 forward triage tool now also allows anyone to get a doctor to look at their particular case for an extremely low fee ($12 USD - though free service is currently available if needed).

Comment by Ben_Harack on Growth and the case against randomista development · 2020-01-20T05:19:46.658Z · EA · GW

Thanks for this piece, I thought it was interesting!

A small error I noticed while reading through one of the references is that the line "For example, France’s GDP per capita is around 60% of US GDP per capita.[7]" is incorrectly summarizing the cited material. The value needs to be 67% to make this sentence correct. The relevant section in the underlying material is: "As an example, suppose we wish to compare living standards in France and the United States. GDP per person is markedly lower in France: France had a per capita GDP in 2005 of just 67 percent of the U.S. value. Consumption per person in France was even lower — only 60 percent of the U.S., even adding government consumption to private consumption."

Comment by Ben_Harack on Healthy Competition · 2019-11-15T17:01:29.171Z · EA · GW

I believe that regional talent pools could also be another factor in favor of the multiple organization scenario. For example, something I think a lot about is how the USA could really use an institution like the Future of Humanity Institute (FHI) in the long run. In addition to all of the points made in the original post, I think that such an institution would improve the overall health of the ecosystem of "FHI-like research" by drawing on a talent pool that is at least somewhat non-overlapping with that drawn upon by FHI.

I think that the talent pools are at least somewhat distinct because a) crossing borders is often logistically challenging or impossible, depending on the scenario; and b) not all job candidates can relocate to the United Kingdom for a variety of personal reasons.

If anyone is interested in discussing an "FHI-like institution in the USA" further, please get in touch with me either via direct message or via ben.harack at visionofearth.org.


Comment by Ben_Harack on What actions would obviously decrease x-risk? · 2019-10-18T03:50:48.298Z · EA · GW

This line of inquiry (that rebuilding after wars is quite different from other periods of time) is explored in G. John Ikenberry's After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order After Major Wars. A quick and entertaining summary of the book - and how it has held up since its publication - was written by Ikenberry in 2018: Reflections on After Victory.

Comment by Ben_Harack on What actions would obviously decrease x-risk? · 2019-10-18T03:42:22.655Z · EA · GW

While I'm sympathetic to this view (since I held it for much of my life), I have also learned that there are very significant risks to developing this capacity naively.

To my knowledge, one of the first people to talk publicly about this was Carl Sagan, who raised it in his television show Cosmos (1980) and in these publications:

Harris, A., Canavan, G., Sagan, C. and Ostro, S., 1994. The Deflection Dilemma: Use Vs. Misuse of Technologies for Avoiding Interplanetary Collision Hazards.

Ben's summary:

  • Their primary concern and point is that a system built to defend humanity from natural asteroids would actually expose us to more risk (of anthropogenic origin) than it would mitigate (of natural origin).
  • Opportunities for misuse of the system depend almost solely on the capability of that system to produce delta-V changes in asteroids (equivalently framed as “response time”). A system capable of ~1 m/s of delta-V would be capable of about 100 times as many misuses as its intended uses. That is, it would see ~100 opportunities for misuse for each opportunity for defending Earth from an asteroid.
  • They say that a high capability system (capable of deflection with only a few days notice) would be imprudent to build at this time.

Sagan, C. and Ostro, S.J., 1994. Dangers of asteroid deflection. Nature, 368(6471), p.501.

Sagan, C., 1992. Between enemies. Bulletin of the Atomic Scientists, 48(4), p.24.

Sagan, C. and Ostro, S.J., 1994. Long-range consequences of interplanetary collisions. Issues in Science and Technology, 10(4), pp.67-72.

Two interesting quotes from the last one:

  • “There is no other way known in which a small number of nuclear weapons can destroy global civilization.”
  • “No matter what reassurances are given, the acquisition of such a package of technologies by any nation is bound to raise serious anxieties worldwide.”

More recently, my collaborator Kyle Laskowski and I have reviewed the relevant technologies (and likely incentives) and have come to a somewhat similar position, which I would summarize as: the advent of asteroid manipulation technologies exposes humanity to catastrophic risk; if left ungoverned, these technologies would open the door to existential risk. If governed, this risk can be reduced to essentially zero. (However, other approaches, such as differential technological development and differential engineering projects do not seem capable of entirely closing off this risk. Governance seems to be crucial.)

So, we presented a poster at EAG 2019 SF: Governing the Emerging Risk Posed By Asteroid Manipulation Technologies, where we summarized these ideas. We're currently expanding this into a paper. If anyone is keenly interested in this topic, reach out to us (contact info is on the poster).

Comment by Ben_Harack on Do we know how many big asteroids could impact Earth? · 2019-10-02T17:12:49.447Z · EA · GW

Epistemic status: I don't have a citation handy for the following arguments, so any reader should consider them merely the embedded beliefs of someone who has spent a significant amount of time studying the solar system and the risks of asteroids.

No, I believe that dark Damocloids will be largely invisible (when they are far away from the sun) even to the new round of telescopes that are being deployed for surveying asteroids. They're very dark and (typically) very far away.

Luckily, I think the consensus is that they're only a small portion of the risk. Most of the risk comes from the near-Earth asteroids (NEAs), since due to orbital mechanics they have many opportunities (~1 per year or so) to strike the Earth, while comets fly through the inner solar system extremely rarely. Thus, by finding nearly all of the really big NEAs, we will have accounted for the vast majority of the possible "civilization ending" or "mass extinction" events in our near future. There will still be a (very) long tail of real risk here due to objects like the Damocloids, but most of the natural risk of asteroids will be addressed once we completely understand the NEAs.

Comment by Ben_Harack on Cause X Guide · 2019-10-02T16:49:24.286Z · EA · GW

Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person taking a look at the arguments and data.

I agree completely regarding information hazards. We've been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we're talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we're in new territory. We've definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven't seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today, and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries who would fight regulation of their capabilities).

If you're interested in the project itself, or in further discussions of these hazards/opportunities, let me know!

Regarding the "arms race" terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., though institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren't the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.

Comment by Ben_Harack on Do we know how many big asteroids could impact Earth? · 2019-09-06T04:35:05.107Z · EA · GW

After reviewing the literature pretty extensively over the last several months for a related project (the risks of human-directed asteroids), it seems to me that there is a strong academic consensus that we've found most of the big ones (though definitely not all - and many people are working hard to create ways for us to find the rest). See this graphic for a good summary of our current status circa 2018: https://www.esa.int/spaceinimages/Images/2018/06/Asteroid_danger_explained

Comment by Ben_Harack on Cause X Guide · 2019-09-03T15:54:24.211Z · EA · GW

Recently, I've been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.

At the moment, we're expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but working on it can be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is still useful; succeeding at preemptive regulation of a technological risk would improve our ability to do it for more difficult cases (e.g., AI); and this risk can popularize the X-risk concept via a manifestation far more concrete than the abstract risks from technologies like AI or biotech (most people understand the prevailing theory of the extinction of the dinosaurs and can somewhat easily imagine such a disaster in the future).

Comment by Ben_Harack on What book(s) would you want a gifted teenager to come across? · 2019-08-30T17:58:35.197Z · EA · GW

Factfulness by Hans Rosling is currently my go-to recommendation for the most important single book I could hand to a generic person.

Why do I hold it in such high regard? I think that it does a good job of teaching us both about the world and about ourselves at the same time. It helps the reader achieve better knowledge and better ability to think clearly (and come to accurate beliefs about the world). It's also very hopeful despite its tendency to tackle head-on some of the darker aspects of our world.

Comment by Ben_Harack on Progress book recommendations · 2019-08-30T17:50:41.216Z · EA · GW

Under "Decision-making and Forecasting" I would add these two:

Superforecasting: The Art and Science of Prediction

Factfulness: Ten Reasons We're Wrong About the World--and Why Things Are Better Than You Think

(Though Factfulness also touches on numerous other categories in the list.)

Comment by Ben_Harack on Who would you like to see do an AMA on this forum? · 2019-08-26T03:26:18.591Z · EA · GW

Toby Ord

Comment by Ben_Harack on Is the community short of software engineers after all? · 2018-05-03T17:27:43.627Z · EA · GW

Following up on this more than a year later, I can vouch for some but not all of these conclusions based on my experience at the high-impact organization I work for, the Human Diagnosis Project (www.humandx.org).

We've found it very difficult to recruit high-quality value-aligned engineers despite the fact that none of the above items really apply to us.

  • Our software engineering team performs very challenging work all over the stack - including infrastructure, backend, and mobile.
  • Working here is probably great for career development (in part because we're on the bleeding edge of numerous technologies and give our engineers broad exposure to them).
  • We pay similar salaries to other early-stage startups in Silicon Valley (and New York).

One problem I can identify right now is that I've attempted to recruit from the EA community a few times with very limited success. Perhaps I've gone about this via the wrong fora or have made other mistakes, but generally the candidates I did find were not good fits for the roles that we have to offer.

This problem continues to this day. Given that we don't have the issues identified above (to my knowledge), my best hypothesis right now is that we're simply unable to reach the right people in the right way - and I'm not sure how to fix that. If anyone has any particular ideas on this front, I'd love to hear them.

That said, if anyone wants to help us out, we're still actively recruiting for a host of roles, including a lot of engineering positions. To learn more, take a look at https://www.humandx.org/team