Posts

Comments

Comment by shaybenmoshe on The effect of cash transfers on subjective well-being and mental health · 2020-11-25T07:15:15.460Z · EA · GW

Thank you for following up and clarifying that.

Comment by shaybenmoshe on The effect of cash transfers on subjective well-being and mental health · 2020-11-21T19:54:56.654Z · EA · GW

I see, thanks for the teaser :)

I was under the impression that you had a rough estimate for some charities (e.g. StrongMinds). Looking forward to seeing your future work on that.

Comment by shaybenmoshe on The effect of cash transfers on subjective well-being and mental health · 2020-11-21T13:40:34.341Z · EA · GW

Thanks for posting that. I'm really excited about HLI's work in general, and especially the work on the kinds of effects you are trying to estimate in this post!

I personally don't have a clear picture of how much $ / WELLBY is considered good (whereas GiveWell's estimates for their leading charities are around $50-100 / QALY). Do you have a table or something similar on your website, summarizing your results for the charities you found to be highly effective, for reference?

Thanks again!

Comment by shaybenmoshe on Have you ever used a Fermi calculation to make a personal career decision? · 2020-11-09T20:54:20.208Z · EA · GW

I recently made a big career change, and I am planning to write a detailed post on this soon. In particular, it will touch on this point.

I did use Fermi calculations to estimate my impact in my career options.
In some areas it was fairly straightforward (the problem is well defined, it is possible to meaningfully estimate the percentage of the problem expected to be solved, etc.). However, in other areas I am clueless as to how to really estimate this (the problem is huge and it isn't clear where I will fit in, my part in the problem is not very clear, there are too many other factors and actors, etc.).

In my case, I had 2 leading options, one of which was reasonably amenable to these kinds of estimates, and the other - not so much. The interesting thing was that in the first case, my potential impact turned out to be around the same order of magnitude as EtG, maybe a little bit more (though with a big confidence interval).

All in all, I think this is a helpful method for gaining some understanding of what you can expect to achieve, though, as usual, these estimates shouldn't be taken too seriously in my opinion.

Comment by shaybenmoshe on Prioritization in Science - current view · 2020-11-04T16:03:02.692Z · EA · GW

I think another interesting example to compare to (which also relates to Asaf Ifergan's comment) is private research institutes and labs. I think they are much more focused on specific goals, and give their researchers different incentives than academia, although the actual work might be very similar. These kinds of organizations span a long range between academia and industry.

There are of course many such examples, some of which are successful and some probably not so much. Here are some that come to my mind: OpenAI, DeepMind, the Institute for Advanced Study, Bell Labs, the Allen Institute for Artificial Intelligence, MIGAL (Israel).

Comment by shaybenmoshe on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-27T07:39:10.293Z · EA · GW

I just wanted to say that I really like your idea, and at least at the intuitive level it sounds like it could work. Looking forward to the assessment of real-world usage!

Also, the website itself looks great, and very easy to use.

Comment by shaybenmoshe on Hiring engineers and researchers to help align GPT-3 · 2020-10-05T21:32:21.454Z · EA · GW

Thanks for the response.
I believe this answers the first part - why GPT-3 specifically poses an x-risk.

Did you or anyone else ever write up what aligning a system like GPT-3 looks like? I have to admit that it's hard for me to even define what being (intent) aligned means for a system like GPT-3, which is not really an agent on its own. How do you define or measure something like this?

Comment by shaybenmoshe on Paris-compliant offsets - a high leverage climate intervention? · 2020-10-05T20:21:54.026Z · EA · GW

Great, thanks!

Comment by shaybenmoshe on Paris-compliant offsets - a high leverage climate intervention? · 2020-10-05T18:01:59.508Z · EA · GW

Thanks for posting this!

Here is a link to the full report: The Oxford Principles for Net Zero Aligned Carbon Offsetting
(I think it's a good practice to include a link to the original reference when possible.)

Comment by shaybenmoshe on Hiring engineers and researchers to help align GPT-3 · 2020-10-03T20:18:40.689Z · EA · GW

Quick question - are these positions relevant as remote positions (not in the US)?

(I wrote this comment separately, because I think it will be interesting to a different, and probably smaller, group of people than the other one.)

Comment by shaybenmoshe on Hiring engineers and researchers to help align GPT-3 · 2020-10-03T20:15:06.258Z · EA · GW

Thank you for posting this, Paul. I have questions about two different aspects.

In the beginning of your post you suggest that this is "the real thing" and that these systems "could pose an existential risk if scaled up".
I personally, and I believe other members of the community, would like to learn more about your reasoning.
In particular, do you think that GPT-3 specifically could pose an existential risk (for example, if it falls into the wrong hands or is scaled up sufficiently)? If so, why, and what is a plausible mechanism by which it poses an x-risk?

On a different matter, what does aligning GPT-3 (or similar systems) mean to you concretely? What would the optimal result of your team's work look like?
(This question assumes that GPT-3 is indeed a "prosaic" AI system, and that we will not gain a fundamental understanding of intelligence by this work.)

Thanks again!

Comment by shaybenmoshe on Does using the mortality cost of carbon make reducing emissions comparable with health interventions? · 2020-09-26T14:15:15.714Z · EA · GW

At some point I tried to estimate this too and got similar results. This raised several points:

  1. I am not sure what the mortality cost of carbon actually measures:
    1. I believe that the cost of an additional ton of carbon depends on the total amount of carbon already released (for example, it is probably very different in a 1C warming scenario than in a 3.5C warming scenario).
    2. The carbon and its effects will stay there and affect people for some unknown time (could be indefinitely, could be until we capture it, or until we go extinct, or some other option). This could substantially alter the result, depending on the time span you use.
  2. The solutions offered by GiveWell's top charities are highly scalable. I think the same cannot be said about CATF, and perhaps about CfRN as well. Therefore, if you want to compare global dev to climate change, it might be better to compare to something which can absorb at least hundreds of millions of dollars yearly. (That said, it is of course still a fair comparison to compare CATF to a specific GiveWell-recommended charity.)
  3. The confidence interval you get (and that I got) is big. In your case it spans two orders of magnitude, and this does not take into account the uncertainty in the mortality cost of carbon. I imagine that if we followed the previous point and used something larger for comparison, the $/ton estimate would have higher confidence. However, I believe that the first point at least indicates that the mortality cost of carbon itself will have a very large confidence interval.
    This is in contrast with the confidence intervals in GiveWell's estimates, which are (if I recall correctly) much narrower.

I would love to hear any responses to these points (in particular, I guess there are some concrete answers to the first point, which will also shed light on the confidence interval of mortality cost of carbon).

To conclude, I personally believe that climate change interventions could save lives at a cost similar to that of global dev interventions, but I also believe that the confidence interval for those will be much, much wider.
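For reference, the conversion underlying the comparison is simple. A worked example with purely illustrative numbers (neither the $1/ton nor the mortality figure below is an estimate):

$$
\frac{\$}{\text{death averted}} \;=\; \frac{\$/\text{ton CO}_2\text{e averted}}{\text{deaths}/\text{ton CO}_2\text{e}} \;=\; \frac{1}{2\times 10^{-4}} \;=\; \$5{,}000.
$$

The wide confidence intervals in both the numerator and the denominator then propagate directly through this ratio.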

Comment by shaybenmoshe on Keynesian Altruism · 2020-09-16T07:03:54.437Z · EA · GW

I agree that it isn't easy to quantify all of these.

Here is something you could do, which unfortunately does not take into account changes in charities' operations at different times, but is quite easy to do (all of the figures should be in real terms).

  1. Choose a large interval of time (say 1900 to 2020), and at each point (say every month or year), decide how much you invest vs how much you donate, according to your strategy (and others).
  2. Choose a model for how much money you have (for example, starting with a fixed amount, or say receiving a fixed amount every year, or receiving an amount depending on the return on investment in the previous year).
  3. Sum up the total money donated over the course of that interval, and calculate how much money you have at the end.

Then, for each strategy, you can compare the two values at the end. You can also sum the total donated and the money left, pretending to donate everything left at the end of the interval. Or you could adjust your strategies such that no money is left at the end.
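As a very rough illustration, here is a minimal Python sketch of this kind of backtest. It is only a sketch under made-up assumptions: the return series is synthetic and stands in for real historical data, and donating a fixed fraction each year is just one example strategy (with a fixed starting amount and no further income, per model 2 above).

```python
import numpy as np

def backtest(returns, donate_fraction, capital=1.0):
    """Donate a fixed fraction of the fund each period and invest the rest."""
    total_donated = 0.0
    for r in returns:
        donation = donate_fraction * capital
        total_donated += donation
        capital = (capital - donation) * (1.0 + r)
    return total_donated, capital

# Synthetic real-return series standing in for historical data
# (e.g. annual real equity returns over 1900-2020).
rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.18, 120)

for f in (0.02, 0.05, 0.10, 0.50):
    donated, left = backtest(returns, f)
    # "Total" pretends everything left is donated at the end of the interval.
    print(f"fraction {f:.2f}: donated {donated:.2f}, "
          f"left {left:.2f}, total {donated + left:.2f}")
```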

Comment by shaybenmoshe on Keynesian Altruism · 2020-09-14T12:48:26.061Z · EA · GW

Thanks for posting this, this is very interesting.

Did you by any chance try to model this? It would be interesting, for example, to compare different strategies and how they would have worked given past data.

Comment by shaybenmoshe on Book Review: Deontology by Jeremy Bentham · 2020-08-13T15:48:08.262Z · EA · GW

Thanks for writing this! I really like the way you write, which I found both fun and light and, at the same time, vividly highlighting the important parts. I too was surprised to learn that this is the version of utilitarianism Bentham had in mind, and I find the views expressed in your summary (Ergo) lovely too.

Comment by shaybenmoshe on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T19:38:39.227Z · EA · GW

I too was surprised when I first read your post. I find it reassuring that our estimates are not far from each other, although the models are essentially different. I suppose we both neglect some aspects of the problem, although both models are somewhat conservative.

I agree that it is probably the case that cell-based meat is very cost-effective at greenhouse gas reduction, and I would love to see models more sophisticated than ours.

Comment by shaybenmoshe on Research Summary: The Subjective Experience of Time · 2020-08-11T19:28:07.914Z · EA · GW

Thank you for the eloquent response, and for the pointers to the parts of your posts relevant to the matter.

I think I understand your position, and I will dig deeper into your previous posts to get a more complete picture of your view. Thanks once more!

Comment by shaybenmoshe on The extreme cost-effectiveness of cell-based meat R&D · 2020-08-11T14:56:23.305Z · EA · GW

Thanks for sharing your computation. This strongly resonates with a (very rough) back-of-the-envelope estimate I ran for the cost-effectiveness of the Good Food Institute; the Guesstimate model is here: https://www.getguesstimate.com/models/16617. The result (which shouldn't be taken too literally) is $1.4 per ton CO2e ($0.05-$5.42 for a 90% CI).

I can give more details on how my model works, but very roughly, I try to estimate the amount of CO2e saved by clean meat in general, and then estimate how much earlier that will happen because of GFI. Again, this is very rough, and I'd love any input, or comparisons to other models.
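For concreteness, here is a minimal Monte Carlo sketch of the model's structure. The parameter distributions below are placeholders I made up for this sketch; the actual inputs live in the Guesstimate model linked above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# All three inputs are placeholder distributions, not the real model's values.
annual_co2e_saved = rng.lognormal(np.log(1e9), 0.5, N)  # tons/year once clean meat scales up
years_earlier = rng.lognormal(np.log(1.0), 0.7, N)      # speed-up attributable to GFI
gfi_spending = rng.lognormal(np.log(2e8), 0.3, N)       # total $ spent by GFI over that period

# Cost-effectiveness: dollars per ton of CO2e brought forward by GFI.
cost_per_ton = gfi_spending / (annual_co2e_saved * years_earlier)
lo, med, hi = np.percentile(cost_per_ton, [5, 50, 95])
print(f"$/ton CO2e -- median: {med:.2f}, 90% CI: ({lo:.2f}, {hi:.2f})")
```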

Comment by shaybenmoshe on Research Summary: The Subjective Experience of Time · 2020-08-10T19:55:06.558Z · EA · GW

Thank you for writing this summary (and conducting this research project)!

I have a question. I am not sure what the standard terminology is, but there are (at least) two different kinds of mental processes: reflexes/automatic responses, and thoughts or experiences which span longer times. I am not certain which is more related to capacity for welfare, but I guess it is the latter. Additionally, I imagine that the experience of time is more relevant to the former. This suggests that maybe the two are not really correlated. Have you thought about this? Is my view of the situation flawed?

Thanks again!

Comment by shaybenmoshe on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-08-05T17:49:35.669Z · EA · GW

As someone in the intersection of these subjects I tend to agree with your conclusion, and with your next comment to Arden describing the design-implementation relationship.

However, while thinking about this, I did come up with a (very rough) idea for AI alignment where formal verification could play a significant role.
One scenario for AGI takeoff, or for solving AI alignment, is to do it inductively - that is, each generation of agents designs the next generation, which should be more sophisticated (and hopefully still aligned). Perhaps one plan to achieve this is as follows (I'm not claiming that any step is easy or even plausible):

  1. Formally define what it means for an agent to be aligned, in such a way that subsequent agents designed by this agent are also aligned.
  2. Build your first generation of AI agents (which should be as lean and simple as possible, to make the next step easier).
  3. Let a (perhaps computer assisted) human prove that the first generation of AI is aligned in the formal sense of 1.

Then, once you deploy the first generation of agents, it is their job to formally prove that further agents designed by them are aligned as well. Hopefully, since they are very intelligent, and plausibly good at manipulating the previous formal proofs, they can find such proofs. Since the proof is formal, humans can trust and verify it (for example using traditional formal proof checkers), despite not being able to come up with the proof themselves.
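Purely to illustrate the control flow of this scheme, here is a toy Python sketch. Everything in it is a stand-in I made up: the alignment definition, the formal system, and the proofs themselves are of course the actual hard parts.

```python
class Agent:
    """Toy agent; a stand-in for illustration only."""

    def __init__(self, name: str):
        self.name = name

    def design_successor(self) -> tuple["Agent", str]:
        successor = Agent(self.name + "+")
        # In the real scheme, the agent must produce a formal proof that
        # its successor satisfies the alignment definition of step 1.
        proof = f"aligned({successor.name})"
        return successor, proof


def check_proof(agent: Agent, proof: str) -> bool:
    """Stand-in for a traditional, human-trusted formal proof checker."""
    return proof == f"aligned({agent.name})"


# Step 3: humans (perhaps computer-assisted) prove gen0 aligned by hand.
current = Agent("gen0")
assert check_proof(current, f"aligned({current.name})")

# Afterwards, each generation certifies the next; deployment halts
# whenever a proof fails to check.
for _ in range(3):
    successor, proof = current.design_successor()
    if not check_proof(successor, proof):
        raise RuntimeError("alignment proof rejected - do not deploy")
    current = successor
print(f"deployed up to {current.name}")
```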

This plan has many pitfalls (for example, each step may turn out to be extremely hard to carry out, or your definition of alignment may be so strict that the agents won't be able to construct any new and interesting aligned agents); however, it is a possible way to be certain about having aligned AI.

Comment by shaybenmoshe on Climate change donation recommendations · 2020-07-19T18:09:30.072Z · EA · GW

I agree with your main argument, but I think the current situation is that we have no estimate at all, and this is bad. We literally have no idea whether GFI averts a ton of CO2e at $0.01 or at $1000. I believe having some very rough estimates could be very useful, and not that hard to do.

Also, I completely agree that splitting donations is a very good idea, and I personally do it (and in particular donated to both CATF and GFI in the past).

Comment by shaybenmoshe on Climate change donation recommendations · 2020-07-19T12:40:12.948Z · EA · GW

Thanks for sharing your perspective. However, I disagree with the conclusion of not performing these evaluations for that reason (though I agree it might make them harder to analyze and to answer accurately).

For example, if it turns out that GFI is 7 times less effective than CATF, that might mean that GFI is an extremely good donation opportunity for someone who wants to support both animal welfare and climate change mitigation. If it turns out that GFI is 1000 times less effective than CATF, then the climate impact of donating to them is negligible.

Knowing the answer to this question could affect many people's donation strategies, especially if they are uncertain about which causes are most important and prefer a diverse portfolio (like me).

Comment by shaybenmoshe on Climate change donation recommendations · 2020-07-19T11:34:27.758Z · EA · GW

Thank you for writing that up!

Do you (or anyone else) have any cost-effectiveness analysis of CO2e emissions averted (even a very rough one) for the charities in appendix 2?
I am particularly interested in estimates for the Good Food Institute impact on CO2e emissions.

EDIT: for future reference, there is a related post on the forum - The extreme cost-effectiveness of cell-based meat R&D.

Comment by shaybenmoshe on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2020-04-29T22:02:04.556Z · EA · GW

Thanks for the follow up!

Comment by shaybenmoshe on Why I'm Not Vegan · 2020-04-11T07:51:50.078Z · EA · GW

I completely agree, and I too was troubled by this analysis. For me, the bottom line is:
The fact that something is of little-to-no cost does not mean that its moral value is also little.

Furthermore, in cases like reducing animal suffering, one can both avoid doing harm oneself (i.e. become vegan) AND donate to relevant charities, rather than OR.

Comment by shaybenmoshe on What should EAs interested in climate change do? · 2020-01-14T07:00:30.116Z · EA · GW

Good to know! Is there any information about Founders Pledge's research project?

Comment by shaybenmoshe on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-08T12:35:32.829Z · EA · GW

Oh I see, I misunderstood you.

Thanks, looking forward to the episode to come out.

Comment by shaybenmoshe on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-07T09:44:24.783Z · EA · GW

Thank you for this answer (and the rest of them!). Could you link to that podcast episode on advising?

Comment by shaybenmoshe on [deleted post] 2019-08-26T16:57:57.181Z

First of all, I think it is a reasonable assumption that there are only finitely many people in the universe, in which case the order does not matter.

Also, the location of people of course matters. If you moved all of Earth's people to Mars at this moment, we would all suffer and die. In the same way, say there are 100 other galaxies, each containing another Earth. Move all of those people here, and we will have enormous overpopulation.


Comment by shaybenmoshe on [deleted post] 2019-08-19T13:56:21.307Z

Hey, this might not really help, but here's a rough idea. I can think about it more thoroughly, or we can discuss it at some point, if you want.

Maybe you shouldn't consider the situation as two possibilities, a finite universe or an infinite universe. An infinite universe is the limit of finite universes. Perhaps, then, you should consider an infinite universe as a sequence of finite universes whose limit is indeed infinite, and work with that.

So for example, you would compute the difference in value in each of the finite universes, take the limit of that, and use this as the value for the infinite universe.

This is a standard method in math and in physics. For example, in math this is formalized in pro-categories (such as profinite groups), and in physics this is a sort of renormalization/regularization.
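In symbols (my notation, and only a sketch; the limits need not exist in general): given an exhausting sequence of finite universes $U_1 \subseteq U_2 \subseteq \cdots$ whose union is the infinite universe $U_\infty$, one could define

$$
V(U_\infty) := \lim_{n\to\infty} V(U_n),
\qquad
\Delta V(a,b) := \lim_{n\to\infty}\bigl(V_a(U_n) - V_b(U_n)\bigr),
$$

where $V_a$ denotes the value under action $a$. Comparing actions $a$ and $b$ then means checking the sign of $\Delta V(a,b)$ whenever the limit exists.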

Comment by shaybenmoshe on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-04T20:40:43.655Z · EA · GW

Not exactly answering your question, but I think this argument (and the follow-up question) neglects an important aspect of your contribution.

Having more vegetarian and vegan people creates an incentive to develop meat substitutes (e.g. Beyond Meat, Impossible Foods).
If those substitutes, and especially clean meat, live up to their promise (i.e. become cheaper, taste just as good, and be at least as healthy as regular meat), they will have the potential to change the meat industry dramatically.
In that situation, many more people may become vegetarian or vegan for economic or health reasons.