Posts

Even More Ambitious Altruistic Tech Efforts 2021-11-20T14:59:16.497Z
Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
My Career Decision-Making Process 2021-01-21T20:17:04.421Z
[Linkpost] Nature Aging: A new Nature journal dedicated to aging research 2021-01-15T17:50:34.950Z

Comments

Comment by ShayBenMoshe (shaybenmoshe) on Why does GiveWell not provide lower and upper estimates for the cost-effectiveness of its top charities? · 2022-07-31T20:00:45.366Z · EA · GW

Not answering the question, but I would like to quickly mention a few of the benefits of having confidence/credible intervals or otherwise quantifying uncertainty. All of these comments are fairly general, and are not specific criticisms of GiveWell's work. 

  1. Decision making under risk aversion - Donors (large or small) may have different levels of risk aversion. In particular, some donors might prefer having higher certainty of actually making an impact at the cost of having a lower expected value. Moreover, (mostly large) donors could build a portfolio of different donations in order to achieve a better risk profile. To that end, one needs to know more about the distribution than a point-estimate provides.
  2. Point-estimates are often done badly - It is fairly easy to make many kinds of mistakes when doing point-estimates, some of which are more noticeable when quantifying uncertainties. To name one example, point-estimates of cost-effectiveness typically try to estimate the expected value, and are often calculated as a product of different factors. While it is true that expected value is multiplicative (assuming that the factors are independent, which is itself sometimes not the case, but that's another problem), this is not true for other statistics, such as the median. I think it is a common mistake to use an estimate of the median (or something in between) in place of the mean, and in many cases the two are wildly different (see the sketch after this list).
  3. Sensitivity analysis - Quantifying uncertainty allows for sensitivity analysis, which serves many purposes, one of which is to get more accurate (point-)estimates and reduce uncertainty. One example is identifying which parameters are the most uncertain, and focusing further (internal and external) research on improving their certainty.
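To illustrate point 2 above, here is a minimal Monte Carlo sketch (the three lognormal factors are made up for the example, not taken from any GiveWell model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Three hypothetical, independent cost-effectiveness factors,
# each lognormal with median 1 and sigma = 1 on the log scale.
factors = rng.lognormal(mean=0.0, sigma=1.0, size=(3, n))
product = factors.prod(axis=0)

print("median of product:", np.median(product))  # ~1, the product of the medians
print("mean of product:  ", product.mean())      # ~exp(1.5) ≈ 4.5
```

With these inputs the product of the medians is about 1 while the expected value of the product is about 4.5, so reading a median-style estimate as an expected value would be off severalfold.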

In direct response to Hazelfire's comment, I think that even if the uncertainty spans only one order of magnitude (he mentioned 2-3, which seems reasonable to me), this could have a really large effect on resource allocation. The bar for funding is currently 8x relative to GiveDirectly IIRC, which is about one order of magnitude, so gaining a better understanding of the certainty could be really important. For instance, we could learn that some interventions which are currently above the bar are not very clearly so, whereas other interventions which seem to be just under the bar could turn out to be fairly certain and thus perhaps a very safe bet.

I think that all of these effects could have a large influence on GiveWell's recommendations and donors' choices, on future research, and directly on getting more accurate point-estimates (an effect which could potentially be fairly big).

Comment by ShayBenMoshe (shaybenmoshe) on Critiques of EA that I want to read · 2022-06-20T14:26:57.352Z · EA · GW

Yeah, that makes sense, and is fairly clear selection bias. Since here in Israel we have a very strong tech hub and many people finishing their military service in elite tech units, I see the opposite selection bias, of people not finding many EA (or even EA-inspired) opportunities that are of interest to them.

I failed to mention that I think your post was great, and I would also love to see (most of) these critiques fleshed out.

Comment by ShayBenMoshe (shaybenmoshe) on Critiques of EA that I want to read · 2022-06-20T14:06:39.684Z · EA · GW

The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

I would like to agree with Aaron's comment and make a stronger claim - my impression is that many EAs around me in Israel, especially those coming from a strong technical background, don't find most direct EA-work very intellectually interesting or fun (ignoring its impact).

Speaking for myself, my background is mostly in pure math and in cyber-security research / software engineering. Putting aside managerial and entrepreneurial roles, it seems to me that most of the roles in EA(-adjacent) organizations open to someone with a background similar to mine are:

  1. Research similar to research at Rethink Priorities or GiveWell - It seems to me that this research mostly involves literature review and analysis of existing research. I find this kind of work to be somewhat interesting, but not nearly as intrinsically interesting as the things I have done so far.
  2. Technical AI safety - This could potentially be very interesting for someone like me; however, I am not convinced by the arguments for the relatively high importance or tractability of AI safety conveyed by EA. In fact, this is where I worry said critique might be right: on the community level, I worry that we are biased by motivated reasoning.
  3. Software engineering - Most of the software needs in EA(-adjacent) organizations seem to be fairly simple technically (but the product and "market-fit" could be hard). As such, for someone looking for more research type of work or more complicated technical problems, this is not very appealing.

Additionally, most of the roles are not available in Israel or open for remote work.

In fact, I think this is a point where the EA community misses many highly capable individuals who could otherwise do great work, if we had interesting enough roles for them.

Comment by ShayBenMoshe (shaybenmoshe) on Announcing Alvea—An EA COVID Vaccine Project · 2022-02-18T21:30:29.507Z · EA · GW

I am extremely impressed by this, and this is a great example of the kind of ambitious projects I would love to see more of in the EA community. I have added it to the list on my post Even More Ambitious Altruistic Tech Efforts.

Best of luck!

Comment by ShayBenMoshe (shaybenmoshe) on Why and how to be excited about megaprojects · 2022-01-25T07:33:00.545Z · EA · GW

I completely agree with everything you said (and my previous comment was trying to convey a part of this, admittedly in a much less transparent way).

Comment by ShayBenMoshe (shaybenmoshe) on Why and how to be excited about megaprojects · 2022-01-24T21:28:17.765Z · EA · GW

I simply disagree with your conclusion - it all boils down to what we have at hand. Doubling the cost-effectiveness also requires work, it doesn't happen by magic. If you are not constrained by the supply of highly effective projects which can use your resources, sure, go for it. As it seems though, we have far more resources than current small-scale projects are able to absorb, and there are a lot of "left-over" resources. Thus, it makes sense to start allocating resources to some less effective stuff.

Comment by ShayBenMoshe (shaybenmoshe) on Why and how to be excited about megaprojects · 2022-01-24T21:00:55.979Z · EA · GW

I agree with the spirit of this post (and have upvoted it) but I think it kind of obscures the really simple thing going on: the (expected) impact of a project is by definition the cost-effectiveness (also called efficiency) times the cost (or resources).
A 2-fold increase in one, while keeping the other fixed, is literally the same as having the roles reversed.
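Written out, with $CE$ for cost-effectiveness and $C$ for cost (just restating the identity above):

$$\text{Impact} = CE \times C, \qquad (2\,CE) \times C = CE \times (2\,C).$$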

The question then is what projects we are able to execute, that is, both come up with an efficient idea and have the resources to execute it. When resources are scarce, you really want to squeeze as much as you can from the efficiency part. Now that we have more resources, we should be more lax, and increase our total impact by pursuing less efficient ideas that still achieve high impact. Right now it starts to look like there are far more resources ready to be deployed than projects which are able to absorb them.

Comment by ShayBenMoshe (shaybenmoshe) on Democratising Risk - or how EA deals with critics · 2021-12-30T21:03:53.123Z · EA · GW

I am not sure that there is actually a disagreement between you and Guy.
If I understand correctly, Guy says that insofar as the funder wants research to be conducted to deepen our understanding of a specific topic, the funder should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community.
This does not seem to conflict with what you said, as the focus is still on work on that specific topic.

Comment by ShayBenMoshe (shaybenmoshe) on Flimsy Pet Theories, Enormous Initiatives · 2021-12-10T21:21:50.333Z · EA · GW

I strongly agree with this post and its message.

I also want to respond to Jason Crawford's response. We don't necessarily need to move to a situation where everyone tries to optimize things as you suggest, but at this point it seems that almost no one tries to optimize for the right thing. I think even changing this for a few percent of entrepreneurial work or philanthropy could have a tremendous effect, without losing much of the creative spark people worry we might lose, and maybe we would even gain more, as new directions open up.

Comment by ShayBenMoshe (shaybenmoshe) on Even More Ambitious Altruistic Tech Efforts · 2021-11-20T21:38:29.172Z · EA · GW

That's great, thanks!
I was aware of Anthropic, but not of the figures behind it.

Unfortunately, my impression is that most funding for such projects is centered around AI safety or longtermism (as I hinted in the post...). I might be wrong about this though, and I will poke around these links and names.

Relatedly, I would love to see OPP/EA Funds fund (at least a seed round or equivalent) such projects, unrelated to AI safety and longtermism, or hear their arguments against that.

Comment by ShayBenMoshe (shaybenmoshe) on Even More Ambitious Altruistic Tech Efforts · 2021-11-20T15:47:18.969Z · EA · GW

Thanks for clarifying Ozzie!
(Just to be clear, this post is not an attack on you or on your position, both of which I highly appreciate :). Instead, I was trying to raise a related point, which seems extremely important to me and which I have been thinking about recently, and to make sure the discussion doesn't converge to a single point).

With regards to the funding situation, I agree that many tech projects could be funded via traditional VCs, but some might not be, especially those that are not expected to be very financially rewarding or that are very risky (a few examples that come to mind are the research units of the HMOs in Israel, tech benefitting people in the developing world [e.g. Sella's teams at Google], and basic research enabling applications later [e.g. research on mental health]). An EA VC which funds projects based mostly on expected impact might be a good idea to consider!

Comment by ShayBenMoshe (shaybenmoshe) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-20T15:05:40.120Z · EA · GW

I wrote a response post Even More Ambitious Altruistic Tech Efforts, and I would love to spinoff relevant discussion there. The tl;dr is that I think we should have even more ambitious goals, and try to initiate projects that potentially have a very large direct impact (rather than focus on tools and infrastructure for other efforts).

Also, thanks for writing this post Ozzie. Despite my disagreements with your post, I mostly agree with your opinions and think that more attention should be steered towards such efforts.

Comment by ShayBenMoshe (shaybenmoshe) on CEA grew a lot in the past year · 2021-11-07T13:47:59.654Z · EA · GW

I just want to add, on top of Haydn's comment to your comment, that:

  1. You don't need the treatment and the control group to be of the same size, so you could, for instance, randomize among the top 300 candidates.

  2. In my experience, when there isn't a clear metric for ordering, it is extremely hard to make clear judgements. Therefore, I think that in practice, it is very likely that, say, places 100-200 in their ranking seem very similar.

I think that these two factors, combined with Haydn's suggestion to take the top candidates and exclude them from the study, make it very reasonable and very low-cost.
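To make this concrete, here is a minimal sketch of such an assignment (Python; the cutoffs 100/200/50 are placeholders, not numbers from CEA's actual process):

```python
import random

random.seed(0)

applicants = [f"applicant_{i}" for i in range(400)]  # hypothetical list, ranked best-first

top_cut = 100          # clearly strong candidates: accepted outright, excluded from the study
study_pool = applicants[top_cut:top_cut + 200]  # next tier, where rankings look very similar
extra_slots = 50       # remaining places, filled at random from the pool

treatment = random.sample(study_pool, extra_slots)       # randomized in
control = [a for a in study_pool if a not in treatment]  # randomized out

accepted = applicants[:top_cut] + treatment
# Note that treatment (50) and control (150) are deliberately unequal in size, as in point 1.
```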

Comment by ShayBenMoshe (shaybenmoshe) on Has anyone done any work on how donating to lab grown meat research (https://new-harvest.org/) might compare to Giving Green's recommendations for fighting climate change? · 2021-04-30T13:30:13.332Z · EA · GW

Last August Stijn wrote a post titled The extreme cost-effectiveness of cell-based meat R&D about this subject.
Let me quote the bottom line (emphasis mine):

This means one euro extra funding spares 100 vertebrate land animals. Including captured and aquaculture fish (also fish used for fish meal for farm animals), the number becomes an order 10 higher: 1000 vertebrate animals saved per euro.
...
Used as carbon offsetting, cell-based meat R&D has a price around 0,1 euro per ton CO2e averted.

In addition, as I wrote in a comment, I also did a back of the envelope guesstimate model to estimate the cost-effectiveness of donations to GFI, and arrived at $1.4 per ton CO2e (and $0.05-$5.42 for 90% CI).

It is important to mention that our methods are not nearly as thorough as the work done by Giving Green or Founders Pledge on climate change, and I wouldn't take it too seriously. Nevertheless, I think that it at least hints at the order of magnitude of the true numbers.

Edit: I just realized that Brian's comment refers to a newer post by Stijn, which I assume reflects his broader opinions. However I think that the discussion in the comments on Stijn's older post that I linked to is also interesting to read.

Comment by ShayBenMoshe (shaybenmoshe) on List of Under-Investigated Fields - Matthew McAteer · 2021-01-30T21:13:18.210Z · EA · GW

Thanks for linking this, this looks really interesting! If anyone is aware of other similar lists, or of more information about those fields and their importance (whether positive or negative), I would be interested in that.

Comment by ShayBenMoshe (shaybenmoshe) on My Career Decision-Making Process · 2021-01-30T09:29:54.295Z · EA · GW

Thanks for detailing your thoughts on these issues! I'm glad to hear that you are aware of the different problems and tensions, and made informed decisions about them, and I look forward to seeing the changes you mentioned being implemented.

I want to add one comment about the How to plan your career article, even if it's already been mentioned. I think it's really great, but it might be a little bit too long for many readers' first exposure. I just realized that you have a summary on the Career planning page, which is good, but I think it might be too short. I found the (older) How to make tough career decisions article very helpful and I think it offers a great balance of information and length, and I personally still refer people to it for their first exposure. I think it would be very useful to have a version of this page (i.e. of similar length) reflecting the process described in the new article.

With regards to longtermism (and expected values), I think that indeed I disagree with the views taken by most of 80,000 hours' team, and that's ok. I do wish you offered a more balanced take on these matters, and maybe even separate the parts which are pretty much a consensus in EA from more specific views you take so that people can make their own informed decisions, but I know that it might be too much to ask and the lines are very blurred in any case.

Comment by ShayBenMoshe (shaybenmoshe) on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-28T09:01:51.803Z · EA · GW

Thanks for publishing negative results. I think that it is important to do so in general, and especially given that many other groups may have relied on your previous recommendations.

If possible, I think you should edit the previous post to reflect your new findings and link to this post.

Comment by ShayBenMoshe (shaybenmoshe) on (Autistic) visionaries are not natural-born leaders · 2021-01-27T12:34:52.750Z · EA · GW

Thanks to Aaron for updating us, and thanks guzey for adding the clarification at the top of the post.

Comment by ShayBenMoshe (shaybenmoshe) on How EA Philippines got a Community Building Grant, and how I decided to leave my job to do EA-aligned work full-time · 2021-01-27T11:10:13.362Z · EA · GW

Thank you for writing this post Brian. I appreciate your choices and would be interested to hear in the future (say in a year, and even after) how things worked out, how excited you are about your work, and whether you are able to sustain this financially.

I also appreciate the fact that you took the time to explicitly write those caveats.

Comment by ShayBenMoshe (shaybenmoshe) on (Autistic) visionaries are not natural-born leaders · 2021-01-26T13:29:58.031Z · EA · GW

I meant the difference between using the two, I don't doubt that you understand the difference between autism and (lack of) leadership. In any case, this was not my main point, which is that the word autistic in the title does not help your post in any way, and spreads misinformation.

I do find the rest of the post insightful, and I don't think you are intentionally trying to start a controversy. If you really believe that this helps your post, please explain why (you haven't so far).

Comment by ShayBenMoshe (shaybenmoshe) on (Autistic) visionaries are not natural-born leaders · 2021-01-26T12:54:13.875Z · EA · GW

I don't understand how you can seriously not see the difference between the two. Autism is a developmental disorder, which manifests itself in many ways, most of which are completely irrelevant to your post. Whereas being a "terrible leader", as you call them, is a personal trait which does not resemble autism in almost any way.

Furthermore, the word autistic in the title is not only completely speculative, but also does not help your case at all.

I think that by using that term so explicitly in your title, you spread misinformation, and with no good reason. I ask you to change the title, or let the forum moderators handle this situation.

Comment by ShayBenMoshe (shaybenmoshe) on My Career Decision-Making Process · 2021-01-25T19:54:22.590Z · EA · GW

Hey Arden, thanks for asking about that. Let me start by also thanking you for all the good work you do at 80,000 Hours, and in particular for the various pieces you wrote that I linked to at 8. General Helpful Resources.

Regarding the key ideas vs old career guide, I have several thoughts which I have written below. Because 80,000 Hours' content is so central to EA, I think that this discussion is extremely important. I would love to hear your thoughts about this Arden, and I will be glad if others could share their views as well, or even have a separate discussion somewhere else just about this topic.

Content

I think that two important aspects of the old career guide are much less emphasized in the key ideas page: the first is general advice on how to have a successful career, and the second is how to make a plan and get a job. Generally speaking, I felt like the old career guide gave more tools to the reader, rather than only information. Of course, the key ideas page also discusses these issues to some extent, but much less so than the previous career guide. I think this was very good career advice which could potentially have a large effect on your readers' careers.

Another important point is that I don't like, and disagree with, the choice to emphasize longtermism and AI safety. Personally, I am not completely persuaded by the arguments for choosing a career based on a longtermist view, and even less so by the arguments for AI safety. More importantly, I had several conversations with people in the Israeli EA community and with people I gave career consultation to, who were alienated by this emphasis. A minority of them felt like me, and the majority understood it as "all you can meaningfully do in EA is AI safety", which was very discouraging for them. I understand that this is not your only focus, but people whose first exposure to your website is the key ideas page might get that feeling, if they are not explicitly told otherwise.

Another point is that the "Global priorities" section takes a completely top-down approach. I do agree that it is sometimes a good approach, but I think that many times it is not. One reason is the tension between opportunities and cause areas which I already wrote about. The other is that some people might already have their career going, or are particularly interested in a specific path. In these situations, while it is true that they can change their careers or realize that they can enjoy a broader collection of careers, it is somewhat irrelevant and discouraging to read about rethinking all of your basic choices. Instead, in these situations it would be much better to help people optimize their current path towards more important goals. Just to give an example, someone who studies law might get the impression that their choice is wrong and not beneficial, while I believe that if they tried they could find highly impactful opportunities (for example the recently established Legal Priorities Project looks very promising).

I think that these are my major points, but I do have some other smaller reservations about the content (for example I disagree with the principle of maximizing expected value, and definitely don't think that this is the way it should be phrased as part of the "the big picture").

Writing Style

I really liked the structure of the previous career guide. It was very straightforward to know what you are about to read and where you can find something, since it was so clearly separated into different pages with clear titles and summaries. Furthermore, its modularity made it very easy to read the parts you are interested in. The key ideas page is much more convoluted: it is very hard to navigate, and all of the expandable boxes are not making it easier.

Comment by ShayBenMoshe (shaybenmoshe) on My Career Decision-Making Process · 2021-01-23T20:13:13.901Z · EA · GW

Thanks for spelling out your thoughts, these are good points and questions!

With regards to potentially impactful problems in health: first, you mentioned anti-aging, and I wish to emphasize that I didn't try to assess it at any point (I am saying this because I recently wrote a post linking to a new Nature journal dedicated to anti-aging). Second, I feel that I am still too new to this domain to really have anything serious to say, and I hope to learn more myself as I progress in my PhD and work at KSM institute. That said, my impression (which is mostly based on conversations with my new advisor) is that there are many areas in health which are much more neglected than others, and in particular receive much less attention from the AI and ML community. From my very limited experience, it seems to me that AI and ML techniques are just starting to be applied to problems in public health and related fields, at least in research institutes outside of the for-profit startup scene. I wish I had something more specific to say, and hopefully I will in a year or two from now.

I completely agree with your view on AI for good being "a robustly good career path in many ways". I would like to mention once more that in order to have a really large impact in it, though, one needs to really optimize for that and avoid the trap of lower counterfactual impact (at least in later stages of the career, after they have enough experience and credentials).

It is very hard for me to say where the highest-impact positions are, and this is somewhat related to the view that I express at the subsection Opportunities and Cause Areas. I imagine that the best opportunities for someone in this field highly depend on their location, connections and experience. For example, in my case it seemed that joining the flood prediction efforts at Google, and the computational healthcare PhD, were significantly better options than the next options in the AI and ML world.

With regards to entering the field, I am super new to this, so I can't really answer. In any case, I think that entering the fields of AI, ML and data science is no different for people in EA than for others, so I would follow the general recommendations. In my situation, I had enough other credentials (background in math and in programming/cyber-security) to make people believe that I could become productive in ML after a relatively short time (though at least one place did reject me for not having a background in ML), so I jumped right into working on real-world problems rather than dedicating time to studying.

As to estimating the impact of a specific role or project, I think it is sometimes fairly straightforward (when the problem is well-defined and the probabilities are fairly high, you can "just do the math" [don't forget to account for counterfactuals!]), while in other cases it might be difficult (for example more basic research or things having more indirect effects). In the latter case, I think it is helpful to have a rough estimate - understand how large the scope is (how many people have a certain disease or die from it every year?), figure out who is working on the problem and which techniques they use, and try to estimate how much of the problem you imagine you can solve (e.g. can we eliminate the disease? [probably not.] how many people can we realistically reach? how expensive is the solution going to be?). All of this together can help you figure out the orders of magnitude you are talking about. Let me give a very rough example of an outcome of these estimates: A project will take roughly 1-3 years, seems likely to succeed, and if successful, will significantly improve the lives of 200-800 people suffering from some disease every year, and there's only one other team working on the exact same problem. This sounds great! Changing the variables a little might make it seem much less attractive, for example if only 4 people will be able to pay for the solution (or suffer from the disease to begin with), or if there are 15 other teams working on exactly the same problem, in which case your impact will probably be much lower. One can also imagine projects with lower chances of success, which if successful will have a much larger effect. I tend to be cautious in these cases, because I think that it is much easier to be wrong about small probabilities (I can say more about this).
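As a toy version of the arithmetic in the example above (every number here, including the counterfactual discount and the years of benefit, is invented for illustration):

```python
# Hypothetical project: 1-3 years of work, likely to succeed, and if successful it
# significantly improves the lives of 200-800 people with some disease every year.
years_of_work = 2              # midpoint of 1-3 years
p_success = 0.7                # "seems likely to succeed"
people_helped_per_year = 500   # midpoint of 200-800
years_of_benefit = 10          # how long the solution stays relevant (a guess)
counterfactual_discount = 0.5  # one other team is working on the same problem

expected_people_helped = (
    p_success * people_helped_per_year * years_of_benefit * counterfactual_discount
)
print(expected_people_helped / years_of_work, "people significantly helped per year of work")
# ~875 here; with 15 competing teams, or only 4 people who can afford the solution,
# the same calculation drops by orders of magnitude.
```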

Let me also mention that it is possible to work on multiple projects at the same time, or over a few years, especially if each one consists of several steps in which you gain more information and can re-evaluate along the way. In such cases, you'd expect some of the projects to succeed, and learn how to calibrate your estimates over time.

Lastly, with regards to your description of my views, that's almost right, except that I also see opportunities for high impact not only on particularly important problems but also on smaller problems which are neglected for some reason (e.g. things that are less prestigious or don't have economic incentives). I'd also add that at least in my case in computational healthcare I also intend to apply other techniques from computer science besides AI and ML (but that's really a different story than AI for good).

This comment has already become way too long, so I will stop here. I hope that it is somewhat useful, and, again, if someone wants me to write more about a specific aspect, I will gladly do so.

Comment by ShayBenMoshe (shaybenmoshe) on My Career Decision-Making Process · 2021-01-23T15:56:36.954Z · EA · GW

Thanks for your comment Michelle! If you have any other comments to make on my process (both positive and negative), I think that would be very valuable for me and for other readers as well.

Important Edit: Everything I wrote below refers only to technical cyber-security (and formal verification) roles. I don't have strong views on whether governance, advocacy or other types of work related to those fields could be impactful. My intuition is that these are indeed more promising than technical roles.

I don't see any particularly important problem that can be addressed using cyber-security or formal verification (now or in the future) which is not already being addressed by the private or public sector. Surely these areas are important for the world, and therefore are utilized and researched outside of EA. For example, (too) many cyber-security companies provide solutions for other organizations (including critical organizations such as hospitals and electricity providers) to protect their data and computer systems. Another example is governments using cyber-security tools for intelligence operations and surveillance. Both examples are obviously important, but not at all neglected.

One could argue that EA organizations need to protect their data and computer systems as well, which is definitely true, but can easily be solved by purchasing the appropriate products or hiring infosec officers, just like in any other organization. Other than that I didn't find any place where cyber-security can be meaningfully applied to assist EA goals.

As for formal verification, I believe that the case is similar - these kinds of tools are useful for certain (and very limited) problems in the software and hardware industry, but I am unaware of any interesting applications for EA causes. One caveat is that I believe that it is plausible (but not very probable) that formal verification can be used for AI alignment, as I outlined in this comment.

My conclusion is that, right now, I wouldn't recommend that people in EA build skills in any of these areas for the sake of having direct impact (of course cyber-security is a great industry for EtG). To convince me otherwise, someone would have to come up with a reasonable suggestion for where these tools could be applied. If anyone has any such ideas (even rough ideas), I would love to hear them!

Comment by ShayBenMoshe (shaybenmoshe) on My Career Decision-Making Process · 2021-01-22T16:48:38.127Z · EA · GW

This is a very good question and I have some thoughts about it.

Let me begin by answering about my specific situation. As I said, I have many years of experience in programming and cyber security. Given my background and connections (mostly from the army) it was fairly easy for me to find multiple companies I could work for as a contractor/part-time employee. In particular, in the past 3 years I have worked part-time in cyber security and had a lot of flexibility in my hours. Furthermore, I am certain that it is also possible to find such positions in more standard software development areas. In fact, just before I finished high school, I took a part-time front-end development position in some Israeli startup.

As for other people, it is harder for me to say. I imagine that it will not be so easy for someone who just graduated to find a high-paying part-time job, but that highly depends on location, domain, and past experience. Generally, I believe that this path mostly suits people who already have some experience in their field or are willing to work as freelancers and accept slower progress in this part of their careers. For example, it can work very well for people already pursuing (ordinary) EtG or who are at later stages of their careers and want to switch to a different career path.

Edit - If this is something people are interested in, I can write a more detailed post about this idea specifically, where we can also have a longer discussion in the comments.

Comment by ShayBenMoshe (shaybenmoshe) on Open Philanthropy: 2020 Allocation to GiveWell Top Charities · 2020-12-30T13:03:46.593Z · EA · GW

Thanks for cross-posting this, I probably wouldn't have heard about it otherwise.

I am very interested in Open Phil's model regarding the best time to donate for such causes. If anyone is aware of similar models for large donors, I would love to hear about them.

Comment by ShayBenMoshe (shaybenmoshe) on My upcoming CEEALAR stay · 2020-12-14T22:08:11.667Z · EA · GW

Thanks for sharing that, that sounds like an interesting plan.

A while ago I was trying to think about potential ways to have a large impact via formal verification (after reading this post). I didn't give it much attention, but it looks like others and I don't see a case for this career path being highly impactful - though I'd love to be proven wrong. I would appreciate it if you could elaborate on your perspective on this. I should probably mention that I couldn't find a reference to formal verification in agent foundations (but I didn't really read it), and Vanessa seemed to reference it as a tangential point, but I might be wrong about both.

I'm interested in formal verification from a purely mathematical point of view. That is, I think it's important for math (but I don't think that formalizing [mainstream] math is likely to be very impactful outside of math). Additionally, I am interested in ideas developed in homotopy type theory, because of their connections to homotopy theory, rather than because I think it is impactful.

Comment by ShayBenMoshe (shaybenmoshe) on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T07:39:11.306Z · EA · GW

With regards to FIRE, I myself still haven't figured out how this fits with my donations. In any case, I think that giving money to beggars sums up to less than $5 per month in my case (and probably even less on average), but I guess that also depends on where you live etc.

Comment by ShayBenMoshe (shaybenmoshe) on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-05T12:44:39.212Z · EA · GW

I would like to reiterate Edo's answer, and add my perspective.

First and foremost, I believe that one can follow EA perspectives (e.g. donate effectively) AND be kind and helpful to strangers, rather than OR (repeating an argument I made before in another context).
In particular, I personally don't record giving a couple of dollars in my donation sheet, and it does not affect my EA-related giving (at least not intentionally).

Additionally, they constitute such a little fraction of my other spending, that I don't notice them financially.
Despite that, I truly believe that being kind to strangers, giving a few coins, or trying to help in other ways, can meaningfully help the other person (even if not as cost-effectively as donating to, say, GiveWell).

I don't view this and my other donations as means to achieve the exact same goal, but rather as two distinct and non-competing ways to achieve the purpose of making the world better.

Comment by ShayBenMoshe (shaybenmoshe) on The effect of cash transfers on subjective well-being and mental health · 2020-11-25T07:15:15.460Z · EA · GW

Thank you for following up and clarifying that.

Comment by ShayBenMoshe (shaybenmoshe) on The effect of cash transfers on subjective well-being and mental health · 2020-11-21T19:54:56.654Z · EA · GW

I see, thanks for the teaser :)

I was under the impression that you have rough estimates for some charities (e.g. StrongMinds). Looking forward to seeing your future work on that.

Comment by ShayBenMoshe (shaybenmoshe) on The effect of cash transfers on subjective well-being and mental health · 2020-11-21T13:40:34.341Z · EA · GW

Thanks for posting that. I'm really excited about HLI's work in general, and especially the work on the kinds of effects you are trying to estimate in this post!

I personally don't have a clear picture of how much $ / WELLBY is considered good (whereas GiveWell's estimates for their leading charities are around 50-100 $ / QALY). Do you have a table or something like that on your website, summarizing your results for charities you found to be highly effective, for reference?

Thanks again!

Comment by ShayBenMoshe (shaybenmoshe) on Have you ever used a Fermi calculation to make a personal career decision? · 2020-11-09T20:54:20.208Z · EA · GW

I recently made a big career change, and I am planning to write a detailed post on this soon. In particular, it will touch this point.

I did use Fermi calculations to estimate my impact in my career options.
In some areas it was fairly straightforward (the problem is well defined, it is possible to meaningfully estimate the percentage of problem expected to be solved, etc.). However, in other areas I am clueless as to how to really estimate this (the problem is huge and it isn't clear where I will fit in, my part in the problem is not very clear, there are too many other factors and actors, etc.).

In my case, I had 2 leading options, one of which was reasonably amenable to these kinds of estimates, and the other - not so much. The interesting thing was that in the first case, my potential impact turned out to be around the same order of magnitude as EtG, maybe a little bit more (though there is a big confidence interval).

All in all, I think this is a helpful method to gain some understanding of the things you can expect to achieve, though, as usual, these estimates shouldn't be taken too seriously in my opinion.

Comment by ShayBenMoshe (shaybenmoshe) on Prioritization in Science - current view · 2020-11-04T16:03:02.692Z · EA · GW

I think another interesting example to compare to (which also relates to Asaf Ifergan's comment) is private research institutes and labs. I think they are much more focused on specific goals, and give their researchers different incentives than academia, although the actual work might be very similar. These kinds of organizations span a long range between academia and industry.

There are of course many such examples, some of which are successful and some are probably less so. Here are some examples that come to my mind: OpenAI, DeepMind, The Institute for Advanced Study, Bell Labs, Allen Institute for Artificial Intelligence, MIGAL (Israel).

Comment by ShayBenMoshe (shaybenmoshe) on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-27T07:39:10.293Z · EA · GW

I just wanted to say that I really like your idea, and at least at the intuitive level it sounds like it could work. Looking forward to the assessment of real-world usage!

Also, the website itself looks great, and very easy to use.

Comment by ShayBenMoshe (shaybenmoshe) on Hiring engineers and researchers to help align GPT-3 · 2020-10-05T21:32:21.454Z · EA · GW

Thanks for the response.
I believe this answers the first part, why GPT-3 poses an x-risk specifically.

Did you or anyone else ever write what aligning a system like GPT-3 looks like? I have to admit that it's hard for me to even have a definition of being (intent) aligned for a system like GPT-3, which is not really an agent on its own. How do you define or measure something like this?

Comment by ShayBenMoshe (shaybenmoshe) on Paris-compliant offsets - a high leverage climate intervention? · 2020-10-05T20:21:54.026Z · EA · GW

Great, thanks!

Comment by ShayBenMoshe (shaybenmoshe) on Paris-compliant offsets - a high leverage climate intervention? · 2020-10-05T18:01:59.508Z · EA · GW

Thanks for posting this!

Here is a link to the full report: The Oxford Principles for Net Zero Aligned Carbon Offsetting
(I think it's a good practice to include a link to the original reference when possible.)

Comment by ShayBenMoshe (shaybenmoshe) on Hiring engineers and researchers to help align GPT-3 · 2020-10-03T20:18:40.689Z · EA · GW

Quick question - are these positions relevant as remote positions (not in the US)?

(I wrote this comment separately, because I think it will be interesting to a different, and probably smaller, group of people than the other one.)

Comment by ShayBenMoshe (shaybenmoshe) on Hiring engineers and researchers to help align GPT-3 · 2020-10-03T20:15:06.258Z · EA · GW

Thank you for posting this, Paul. I have questions about two different aspects.

In the beginning of your post you suggest that this is "the real thing" and that these systems "could pose an existential risk if scaled up".
I personally, and I believe other members of the community, would like to learn more about your reasoning.
In particular, do you think that GPT-3 specifically could pose an existential risk (for example if it falls into the wrong hands, or is scaled up sufficiently)? If so, why, and what is a plausible mechanism by which it poses an x-risk?

On a different matter, what does aligning GPT-3 (or similar systems) mean for you concretely? What would the optimal result of your team's work look like?
(This question assumes that GPT-3 is indeed a "prosaic" AI system, and that we will not gain a fundamental understanding of intelligence by this work.)

Thanks again!

Comment by ShayBenMoshe (shaybenmoshe) on Does using the mortality cost of carbon make reducing emissions comparable with health interventions? · 2020-09-26T14:15:15.714Z · EA · GW

At some point I tried to estimate this too and got similar results. This raised several points:

  1. I am not sure what the mortality cost of carbon actually measures:
    1. I believe that the cost of an additional ton of carbon depends on the amount of total carbon released already (for example, in a 1C warming scenario it is probably very different than in a 3.5C warming scenario).
    2. The carbon and its effect will stay there and affect people for some unknown time (could be indefinitely, could be until we capture it, or until we go extinct, or some other option). This could highly alter the result, depending on the time span you use.
  2. The solutions offered by GiveWell's top charities are highly scalable. I think the same cannot be said about CATF, and perhaps about CfRN as well. Therefore, if you want to compare global dev to climate change, it might be better to compare to something which can absorb at least hundreds of millions of dollars yearly. (That said, it is of course still a fair comparison to compare CATF to a specific GiveWell-recommended charity.)
  3. The confidence interval you get (and that I got) is big. In your case it spans 2 orders of magnitude, and this does not take into account the uncertainty in the mortality cost of carbon. I imagine that if we followed the previous point and used something larger for comparison, the $/carbon would have higher confidence. However, I believe that the first point at least indicates that the mortality cost of carbon will have a very large confidence interval.
    This is in contrast with the confidence interval in GiveWell's estimates, which is (if I recall correctly) much narrower.

I would love to hear any responses to these points (in particular, I guess there are some concrete answers to the first point, which will also shed light on the confidence interval of mortality cost of carbon).

To conclude, I personally believe that climate change interventions could save lives at a cost similar to that of global dev interventions, but I also believe that the confidence interval for those will be much much higher.
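For concreteness, the kind of comparison I have in mind looks roughly like this (both numbers below are placeholders, chosen only to show how the uncertainties combine):

```python
# Placeholder inputs - not estimates I stand behind.
cost_per_ton_co2e = 1.0          # $/tCO2e averted by some climate charity
mortality_cost_of_carbon = 2e-4  # deaths per tCO2e (highly uncertain, see point 1)

cost_per_life_saved = cost_per_ton_co2e / mortality_cost_of_carbon
print(f"${cost_per_life_saved:,.0f} per life saved")  # $5,000 with these inputs

# Both inputs carry wide intervals, and they multiply: two orders of magnitude on
# $/ton times a wide interval on deaths/ton gives a very wide interval on $/life,
# unlike the much narrower GiveWell-style estimates.
```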

Comment by ShayBenMoshe (shaybenmoshe) on Keynesian Altruism · 2020-09-16T07:03:54.437Z · EA · GW

I agree that it isn't easy to quantify all of these.

Here is something you could do, which unfortunately does not take into account changes in charities' operations at different times, but is quite easy to do (all of the figures should be in real terms).

  1. Choose a large interval of time (say 1900 to 2020), and at each point (say every month or year), decide how much you invest vs how much you donate, according to your strategy (and the other strategies you want to compare against).
  2. Choose a model for how much money you have (for example, starting with a fixed amount, or receiving a fixed amount every year, or receiving an amount depending on the return on investment in the previous year).
  3. Sum up the total money donated over the course of that interval, and calculate how much money you have in the end.

Then, for each strategy, you can compare these two values at the end. You can also sum the total donated and the money left, pretending to donate everything left at the end of the interval. Or you could adjust your strategies such that no money is left at the end.
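A minimal sketch of that backtest, for one simple family of strategies (donating a fixed fraction of the pot each year, starting from a fixed amount); the return series here is simulated, standing in for real historical data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = 120                              # e.g. 1900-2020
returns = rng.normal(0.05, 0.15, years)  # stand-in for real (inflation-adjusted) returns

def backtest(donate_fraction, initial=1.0):
    """Donate a fixed fraction of the pot each year and invest the rest."""
    pot, total_donated = initial, 0.0
    for r in returns:
        donation = donate_fraction * pot
        total_donated += donation
        pot = (pot - donation) * (1 + r)
    return total_donated, pot

for frac in (0.02, 0.05, 0.10):
    donated, left = backtest(frac)
    # Compare strategies by total donated, pretending whatever is left is donated at the end.
    print(f"donate {frac:.0%}/yr: donated {donated:.2f}, left over {left:.2f}, total {donated + left:.2f}")
```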

Comment by ShayBenMoshe (shaybenmoshe) on Keynesian Altruism · 2020-09-14T12:48:26.061Z · EA · GW

Thanks for posting this, this is very interesting.

Did you by any chance try to model this? It would be interesting, for example, to compare different strategies and how they would have worked given past data.

Comment by ShayBenMoshe (shaybenmoshe) on Book Review: Deontology by Jeremy Bentham · 2020-08-13T15:48:08.262Z · EA · GW

Thanks for writing this! I really like the way you write, which I found both fun and light and, at the same time, highlighting the important parts vividly. I too was surprised to learn that this is the version of utilitarianism Bentham had in his mind, and I find the views expressed in your summary (Ergo) lovely too.

Comment by ShayBenMoshe (shaybenmoshe) on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T19:38:39.227Z · EA · GW

I too was surprised when I first read your post. I find it reassuring that our estimates are not far from each other, although the models are essentially different. I suppose we both neglect some aspects of the problem, although both models are somewhat conservative.

I agree that it is probably the case that cell-based meat is very cost-effective at greenhouse gas reduction, and I would love to see more sophisticated models than ours.

Comment by ShayBenMoshe (shaybenmoshe) on Research Summary: The Subjective Experience of Time · 2020-08-11T19:28:07.914Z · EA · GW

Thank you for the eloquent response, and for the pointers to the parts of your posts relevant to the matter.

I think I understand your position, and I will dig deeper into your previous posts to get a more complete picture of your view. Thanks once more!

Comment by ShayBenMoshe (shaybenmoshe) on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T14:56:23.305Z · EA · GW

Thanks for sharing your computation. This highly resonates with a (very rough) back-of-the-envelope estimate I ran for the cost-effectiveness of the Good Food Institute; the guesstimate model is here: https://www.getguesstimate.com/models/16617. The result (which shouldn't be taken too literally) is $1.4 per ton CO2e (and $0.05-$5.42 for 90% CI).

I can give more details on how my model works, but very roughly, I try to estimate the amount of CO2e saved by clean meat in general, and then try to estimate how much earlier that will happen because of GFI. Again, this is very rough, and I'd love any input, or comparison to other models.
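Very roughly, the structure is something like the sketch below (the numbers are placeholders, not the values in the actual Guesstimate model, and the real model samples distributions rather than point values):

```python
# Simplified structure: CO2e saved per year once clean meat displaces conventional
# meat, times how many years earlier GFI makes that happen, divided by GFI's spending.
co2e_saved_per_year = 1e9      # tCO2e/year displaced by clean meat (placeholder)
years_brought_forward = 0.5    # how much earlier GFI makes this happen (placeholder)
gfi_spending = 5e8             # $ spent by GFI over the relevant period (placeholder)

dollars_per_ton = gfi_spending / (co2e_saved_per_year * years_brought_forward)
print(f"${dollars_per_ton:.2f} per tCO2e averted")  # $1.00 with these placeholders
```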

Comment by ShayBenMoshe (shaybenmoshe) on Research Summary: The Subjective Experience of Time · 2020-08-10T19:55:06.558Z · EA · GW

Thank you for writing this summary (and conducting this research project)!

I have a question. I am not sure what the standard terminology is, but there are (at least) two different kinds of mental processes: reflexes/automatic responses, and thoughts or experiences which span longer times. I am not certain which are more related to capacity for welfare, but I guess it is the latter. Additionally, I imagine that the experience of time is more relevant for the former. This suggests that maybe the two are not really correlated. Have you thought about this? Is my view of the situation flawed?

Thanks again!

Comment by ShayBenMoshe (shaybenmoshe) on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-08-05T17:49:35.669Z · EA · GW

As someone in the intersection of these subjects I tend to agree with your conclusion, and with your next comment to Arden describing the design-implementation relationship.

Edit 19 Feb 2022: I want to clarify my position, namely, that I don't see formal verification as a promising career path. As for what I write below, I both don't believe it is a very practical suggestions, and I am not at all sold on AI safety.

However, while thinking about this, I did come up with a (very rough) idea for AI alignment, where formal verification could play a significant role.
One scenario for AGI takeoff, or for solving AI alignment, is to do it inductively - that is, each generation of agents designs the next generation, which should be more sophisticated (and hopefully still aligned). Perhaps one plan to achieve this is as follows (I'm not claiming that any step is easy or even plausible):

  1. Formally define what it means for an agent to be aligned, in such a way that subsequent agents designed by this agent are also aligned.
  2. Build your first generation of AI agents (which should be lean and simple as possible, to make the next step easier).
  3. Let a (perhaps computer assisted) human prove that the first generation of AI is aligned in the formal sense of 1.

Then, once you deploy the first generation of agents, it is their job to formally prove that further agents designed by them are aligned as well. Hopefully, since they are very intelligent, and plausibly good at manipulating the previous formal proofs, they can find such proofs. Since the proof is formal, humans can trust and verify it (for example using traditional formal proof checkers), despite not being able to come up with the proof themselves.

This plan has many pitfalls (for example, each step may turn out to be extremely hard to carry out, or maybe your definition of alignment will be so strict that the agents won't be able to construct any new and interesting aligned agents), however it is a possible way to be certain about having aligned AI.

Comment by ShayBenMoshe (shaybenmoshe) on Climate change donation recommendations · 2020-07-19T18:09:30.072Z · EA · GW

I agree with your main argument, but I think that the current situation is that we have no estimate at all, and this is bad. We literally have no idea if GFI averts 1 ton CO2e at $0.01 or at $1000. I believe having some very rough estimates could be very useful, and not that hard to do.

Also, I completely agree that splitting donations is a very good idea, and I personally do it (and in particular donated to both CATF and GFI in the past).