Posts

My notes on: Know what you’re optimising for | Alex Lawsen 2022-06-26T11:04:58.969Z
Red-teaming Holden Karnofsky's AI timelines 2022-06-25T14:24:42.910Z
My notes on: A Very Rational End of the World | Thomas Moynihan 2022-06-20T08:50:01.407Z
Are poultry birds really important? Yes... 2022-06-19T18:24:14.436Z
When should the inverse-variance method be applied to distributions? 2022-06-14T14:33:51.656Z
Comparison between the hedonic utility of human life and poultry living time 2022-06-08T07:52:55.743Z
My notes on: Searching for outliers 2022-06-03T16:19:50.746Z
How to determine distribution parameters from quantiles 2022-05-30T15:20:26.300Z
Types of information hazards 2022-05-29T14:30:44.891Z
My notes on: Sequence thinking vs. cluster thinking 2022-05-25T15:03:24.900Z
Do you have impostor syndrome? 2022-05-08T11:23:14.633Z
Donations to effective climate charities needed to cancel lifetime GHG emissions 2022-04-25T09:45:32.637Z
Cost-effectiveness of donating a kidney 2022-04-23T21:50:24.791Z
What is the cost-effectiveness of GiveWell top life-saving charities under total utilitarianism? 2022-04-22T17:22:33.152Z
Summary: The Value of Giving Money to Different Groups | Toby Ord 2022-04-22T10:33:28.296Z
Summary: The Importance of End-of-Life Welfare | Browning & Veit 2022-04-22T10:19:05.378Z
Civilizational vulnerabilities 2022-04-22T09:37:42.770Z
Direct effects of marine plastic pollution on some wild animals seem small 2022-04-19T11:22:30.672Z
My notes on: Why we can’t take expected value estimates literally (even when they’re unbiased) 2022-04-18T13:10:41.896Z
Are there any uber-analyses of GiveWell/ACE top charities? 2022-04-16T10:42:31.156Z
Summary: Evidence, cluelessness and the long term | Hilary Greaves 2022-04-15T17:22:34.555Z
Cost-effectiveness of sending personal messages 2022-03-29T14:38:14.282Z

Comments

Comment by Vasco Grilo (vascoamaralgrilo) on Red-teaming Holden Karnofsky's AI timelines · 2022-06-25T20:10:50.273Z · EA · GW

Thanks for commenting, Peter!

Your median estimate for the conservative and aggressive bioanchor reports in your table are accidentally flipped (2090 is the conservative median, not the aggressive one - and vice versa for 2040).

Corrected, thanks!

I don't think it makes sense to deviate from Cotra's best guess and create a mean out of aggregating between the conservative and aggressive estimates.

I agree. (Note the distribution we fitted to "Bio anchors" (row 4 of the 1st table of this section) only relies on Cotra's "best guesses" for the probability of TAI by 2036 (18 %) and 2100 (80 %).)

The "Representativeness" section is very interesting and I'd love to see more timelines analyzed concretely and included in aggregations.

Thanks for the sources! Regarding the aggregation of forecasts, I found this article quite interesting.

Comment by Vasco Grilo (vascoamaralgrilo) on Are poultry birds really important? Yes... · 2022-06-21T11:37:38.957Z · EA · GW

Thanks for commenting! I am aware of that article, but you have just nudged me to make the calculation. Based on the Weighted Animal Welfare Index of Charity Entrepreneurship:

  • The "total welfare score" (WS) is:
    • Lower than 100 for humans (as that is the defined maximum).
    • -44 for "FF fish – traditional aquaculture".
    • -56 for "FF broiler chicken".
  • The "estimated population size" (P) is:
    • 1 T for "FF fish – traditional aquaculture".
    • 22 G for "FF broiler chicken".
  • Consequently, the total welfare (= WS*P) is:
    • 44 T for "FF fish – traditional aquaculture".
    • 1.2 T for "FF broiler chicken".
    • Lower than 0.80 T (= 100 * 8.0 G) for humans.

This suggests the negative utility of FF fish is:

  • About 40 times (= 44/1.232) as large as that of FF chickens.
  • More than 55 times (= 44/0.80) as large as that of humans.
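The arithmetic above can be reproduced in a few lines (the figures are the quoted Weighted Animal Welfare Index values; the variable names are mine):

```python
# Welfare scores (WS) and population sizes (P) as quoted above; the human
# score of 100 is the scale's maximum, so the human total is an upper bound.
welfare_score = {"fish": -44, "chicken": -56, "human": 100}
population = {"fish": 1e12, "chicken": 22e9, "human": 8.0e9}

total_welfare = {k: welfare_score[k] * population[k] for k in welfare_score}

fish_vs_chicken = total_welfare["fish"] / total_welfare["chicken"]   # ≈ 36
fish_vs_human = abs(total_welfare["fish"]) / total_welfare["human"]  # = 55, a lower bound
```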
Comment by Vasco Grilo (vascoamaralgrilo) on Michael Nielsen's "Notes on effective altruism" · 2022-06-18T08:47:42.611Z · EA · GW

(I have crossposted the comments below here.)

Thanks for this article! Below are some comments.

Perhaps we need a Center for Effective Effective Altruism? Or Givewellwell, evaluating the effectiveness of effectiveness rating charities.

Note there are some external evaluations of EA-aligned organisations and recommendations made by them. Some examples:

Again: if your social movement "works in principle" but practical implementation has too many problems, then it's not really working in principle, either. The quality "we are able to do this effectively in practice" is an important (implicit) in-principle quality.

I think this is an important point.

This is a really big problem for EA. When you have people taking seriously such an overarching principle, you end up with stressed, nervous people, people anxious that they are living wrongly. The correct critique of this situation isn't the one Singer makes: that it prevents them from doing the most good. The critique is that it is the wrong way to live.

In practice, it is unclear to me how different the 2 critiques are. I would say doing the most good is most likely not compatible with "living in a wrong way", because too much stress etc. is not good (for yourself or others).

Furthermore, the notion of a single "the" good is also suspect. There are many plural goods, which are fundamentally immeasurable and incommensurate and cannot be combined.

"The" good is a very complex function of reality, but why would it be fundamentally immeasurable and incommensurate?

Indeed, the more illegibility you conquer, the more illegibility springs up, and the greater the need for such work.

I am not sure I fully understand the concept of illegibility, but it does not seem to be much different from knowledge about the unknown. As our knowledge about what was previously unknown increases, knowledge about what is still unknown also increases. Why is this problematic?

Comment by Vasco Grilo (vascoamaralgrilo) on AI Could Defeat All Of Us Combined · 2022-06-17T15:43:28.504Z · EA · GW

Won't we see warning signs of AI takeover and be able to nip it in the bud? I would guess we would see some warning signs, but does that mean we could nip it in the bud? Think about human civil wars and revolutions: there are some warning signs, but also, people go from "not fighting" to "fighting" pretty quickly as they see an opportunity to coordinate with each other and be successful.

After seeing warning signs of an incoming AI takeover, I would expect people to go from "not fighting" to "fighting" the AI takeover. As it would be bad for virtually everyone (outside of the apocalyptic residual), there would be an incentive to coordinate against the AIs. This seemingly contrasts with civil wars and revolutions, in which there are many competing interests.

That being said, I do not think the above is a strong objection, because humanity does not have a flawless track record of coordinating to solve global problems.

[Footnote 11] I don't go into detail about how AIs might coordinate with each other, but it seems like there are many options, such as by opening their own email accounts and emailing each other.

I can see "how AIs might coordinate with each other". How about the questions:

  • Why should we expect AIs to coordinate with each other? Because they would derive from the same system, and therefore have the same goal?
    • On the one hand, intelligent agents seem to coordinate to achieve their goals.
    • However, assuming powerful AIs are extreme optimisers, even a very small difference between the goals of two powerful systems might lead to a breakdown in cooperation.
  • If multiple powerful AIs, with competing goals, emerge at roughly the same time, can humans take advantage?
    • Regardless of the answer, a war between powerful AIs still seems a pretty bad outcome.
      • A simple analogy: human wars might eventually benefit some animals, but that does little to change the fact that most of the power to shape the world belongs to humans.
    • However, fierce competition between potentially powerful AIs at an early stage might give humans time to react (before e.g. one of the AIs dominates the others, and starts thinking about how to conquer the wider world).
Comment by Vasco Grilo (vascoamaralgrilo) on Proposal: Impact List -- like the Forbes List except for impact via donations · 2022-06-16T19:39:05.584Z · EA · GW

SoGive has rated about 100 top UK charities with the goal of increasing the amount of effective donations via popularising impact ratings (note this is not the only SoGive project). I can see some similarities with what you are describing:

  • As the impact of the donations of individuals is very much connected to the cost-effectiveness of the organisations to which they donate, I guess producing Impact List might also involve assessing lots of organisations.
  • Both theories of change involve influencing the donations of high-net-worth individuals.
  • Both projects seem quite time-intensive.
  • Ideally, Impact List aims to compare the impact of donations going to organisations in different cause areas, in the same way that SoGive uses the same rating scale for all cause areas.

I encourage you to talk with Sanjay Joshi. I am also volunteering at SoGive, so we can also chat!

Comment by Vasco Grilo (vascoamaralgrilo) on When should the inverse-variance method be applied to distributions? · 2022-06-16T18:19:56.789Z · EA · GW

I will try to illustrate what I mean with an example:

Meanwhile, I have realised the inverse-variance method minimises the variance of a weighted mean of X1 and X2 (and have updated the question above to reflect this).
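As a minimal sketch of this, the inverse-variance weighted mean of two independent, unbiased estimates (the example numbers are hypothetical):

```python
import numpy as np

def inverse_variance_mean(means, variances):
    """Weighted mean with weights 1/variance; among unbiased linear
    combinations of independent estimates, this minimises the variance."""
    w = 1 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(means)) / np.sum(w))

m = inverse_variance_mean([10.0, 14.0], [1.0, 4.0])  # → 10.8
```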

Comment by Vasco Grilo (vascoamaralgrilo) on Valuing research works by eliciting comparisons from EA researchers · 2022-06-16T18:01:38.254Z · EA · GW

That also makes sense to me: as the sum of the shares of total impact adds up to 1, each researcher has the same weight on the combined scores (as with the z-scores).

Comment by Vasco Grilo (vascoamaralgrilo) on When should the inverse-variance method be applied to distributions? · 2022-06-16T10:57:21.921Z · EA · GW

I was thinking about cases in which X1 and X2 are non-linear functions of arrays of Monte Carlo samples generated from distributions of different types (e.g. loguniform and lognormal). To calculate E(X1), I can simply compute the mean of the elements of X1. I was looking for a similar simple formula to combine X1 and X2, without having to work with the original distributions used to compute X1 and X2.

A concrete simple example would be combining the following:

  • According to estimate 1, X is as likely to be 1, 2, 3, 4 or 5: X1 = [1, 2, 3, 4, 5].
  • According to estimate 2, X is as likely to be 2, 4, 6, 8 or 10: X2 = [2, 4, 6, 8, 10].
  • The generation mechanisms of estimates 1 and 2 are not known.
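One simple option consistent with treating all elements as equiprobable is to pool the two sample arrays, though note this yields a 50/50 mixture of the two estimates rather than a Bayesian combination:

```python
import numpy as np

X1 = np.array([1, 2, 3, 4, 5])
X2 = np.array([2, 4, 6, 8, 10])

# Pooling the arrays treats every sample as equiprobable, i.e. it forms a
# 50/50 mixture of the two estimates (not a Bayesian update of one by the other).
pooled = np.concatenate([X1, X2])
mean_pooled = pooled.mean()  # (3 + 6) / 2 = 4.5
```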
Comment by Vasco Grilo (vascoamaralgrilo) on Differences in the Intensity of Valenced Experience across Species · 2022-06-15T20:59:57.753Z · EA · GW

That makes sense. 

Actually, the difference between the mean and median is much smaller than I expected. For 1/N = 221 M / 86 G = 0.00256 (ratio between the number of neurons of a red junglefowl and human taken from here), the mean and median of a distribution whose 1st and 99th percentiles are 1/N and 1 are:

  • Lognormal distribution ("very concentrated"): 0.1 and 0.05, i.e. the mean is only 2 times as large as the median.
  • Loguniform ("not concentrated"): 0.2 and 0.05, i.e. the mean is only 3 times as large as the median.
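These figures can be reproduced by fitting each distribution to the stated 1st and 99th percentiles; a sketch, using only NumPy and the standard library:

```python
import numpy as np
from statistics import NormalDist

p_lo, p_hi = 0.01, 0.99
q_lo, q_hi = 221e6 / 86e9, 1.0   # neuron-count ratio 1/N and 1

# Lognormal: choose mu and sigma so the 1st/99th percentiles hit q_lo and q_hi.
z = NormalDist().inv_cdf(p_hi)                     # ≈ 2.326
mu = (np.log(q_lo) + np.log(q_hi)) / 2
sigma = (np.log(q_hi) - np.log(q_lo)) / (2 * z)
ln_median = np.exp(mu)                             # ≈ 0.05
ln_mean = np.exp(mu + sigma**2 / 2)                # ≈ 0.1

# Loguniform on [a, b]: choose the support so the same percentiles match.
L = (np.log(q_hi) - np.log(q_lo)) / (p_hi - p_lo)  # width in log space
ln_a = np.log(q_lo) - p_lo * L
a, b = np.exp(ln_a), np.exp(ln_a + L)
lu_median = np.exp(ln_a + L / 2)                   # ≈ 0.05
lu_mean = (b - a) / np.log(b / a)                  # ≈ 0.17
```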

The mean moral weight of poultry birds relative to humans of 2 I estimated here is 10 times as large as the one corresponding to the loguniform distribution just above. This makes me think 2 is not an unreasonably high estimate, especially bearing in mind that there are factors, such as the clock speed of consciousness, which might increase the moral weight of poultry birds relative to humans, rather than decrease it as the number of neurons does.

Comment by Vasco Grilo (vascoamaralgrilo) on Valuing research works by eliciting comparisons from EA researchers · 2022-06-15T11:07:11.184Z · EA · GW

Thanks for the post!

Have you considered using z-scores:

  • z("researcher A", "text X") = ("geometric mean of text X according to researcher A" -  mu("geometric means of researcher A"))/sigma("geometric means of researcher A")?

This would ensure that each researcher has the same weight on the combined scores, as the sum of the z-scores for each researcher would be null.
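A minimal sketch of the proposed z-scoring, with hypothetical geometric-mean scores for two researchers:

```python
import numpy as np

# Hypothetical per-researcher geometric-mean scores for three texts.
scores = {
    "researcher A": np.array([1.0, 4.0, 7.0]),
    "researcher B": np.array([10.0, 20.0, 60.0]),
}

def z_scores(x):
    # Subtract the researcher's own mean and divide by their standard
    # deviation, so each researcher's z-scores sum to zero.
    return (x - x.mean()) / x.std()

combined = sum(z_scores(v) for v in scores.values()) / len(scores)
```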

Comment by Vasco Grilo (vascoamaralgrilo) on Differences in the Intensity of Valenced Experience across Species · 2022-06-15T10:19:56.867Z · EA · GW

I think sqrt(N) is a natural choice for the amount by which the intensity is increased and the response is decreased as the mean (or mode?) of a prior distribution, since we use the same factor increase/decrease for each. But this relies on a very speculative symmetry.

I think deriving sqrt(N) from the geometric mean between 1 and N is not the best approach, even assuming 1 and N are the "true" minimum and maximum scaling factors. The geometric mean between two quantiles whose probabilities sum to 1 (e.g. the 0th and 100th percentiles) corresponds to the median of a loguniform/lognormal distribution, but what we arguably care about is the mean, which is larger.

Comment by Vasco Grilo (vascoamaralgrilo) on When should the inverse-variance method be applied to distributions? · 2022-06-15T09:23:10.017Z · EA · GW

Thanks for the reply!

I also think the above formula does not formally apply to non-normal distributions, but I was wondering whether it was a good enough approximation.

Is there a simple way of applying Bayes' rule to two arrays X1 and X2 of Monte Carlo samples? I believe this is analogous to considering that all elements of X1 and X2 are equiprobable.

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-14T13:22:24.938Z · EA · GW

I would recommend looking into more general treatments of moral uncertainty instead, and just using an approach like variance voting or moral parliament, applied to your whole expected value over outcomes, not PH (or HP).

I will do, thanks!

You can estimate the things you want to this way, but the assumptions are too strong, so you shouldn't trust the estimates, and this is partly why you get the average chicken having greater capacity for welfare than the average human in expectation.

Note that it is possible to obtain a mean moral weight much smaller than 1 with exactly the same method, but different parameters. For example, changing the 90th percentile of moral weight of poultry birds if these are moral patients from 10 to 0.1 results in a mean moral weight of 0.02 (instead of 2). I have added to this section one speculative explanation for why estimates for the moral weight tend to be smaller.

If you instead assumed P were constant (although this would be even more suspicious), you'd get pretty different results.

I have not defined P, but I agree I could, in theory, have estimated R_PH (and S_PH) based on P = "utility of poultry living time (-pQALY/person/year)". However, as you seem to note, this would be even more prone to error ("more suspicious"). The two methods are mathematically equivalent under my assumptions, and therefore it makes much more sense to me as a human to use QALY (instead of pQALY) as the reference unit.

Michael, once again, thank you so much for all these comments!

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-14T12:47:54.757Z · EA · GW

I have now contextualised in this section how unusual my results are, and proposed a speculative explanation.

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-14T10:42:04.857Z · EA · GW

Note that if you divide a random variable with units by its variance, the result will not be unitless (it'll have the reciprocal units of the random variable), and so you would need to make sure the units match before adding.

I agree, but I do not expect this to be a problem:

A priori, I would expect any theory of consciousness to produce a mean moral weight of poultry birds relative to humans in pQALY/QALY [or QALY/pQALY]. 

Moreover, if this is not the case, it seems to me that weighting the various moral weight distributions by the reciprocal of their standard deviations (or any other metric, with or without units) would also not be possible:

  • As you point out, the terms in the numerator would both be unitless, and therefore adding them would not be a problem.
  • However, the terms in the denominator would have different units. For example, for 2 moral weight distributions MWA and MWB with units A and B, the terms in the denominator would have units A^-1 and B^-1.

Dividing by the standard deviation or the range or some other statistics with the same units as the random variable would work.

As explained above, I do not see how it would be possible to combine the results of different theories if these cannot be expressed in the same units.

 isn't generally useful for this without further assumptions that are unjustified and plausibly wrong, e.g. that  and  are independent.

In order to calculate something akin to  instead of , I would compute S_PH = T*PH*Q + H instead of R_PH = T*PH*Q/H (see definitions here), assuming:

  • All the distributions I defined in Methodology are independent.
  • All theories of consciousness produce a distribution for the moral weight of poultry birds relative to humans in QALY/pQALY.
  • PH represents the weighted mean of all these distributions.

Under these assumptions (I have added the 1st to Methodology, and the 2nd and 3rd to Moral weight of poultry), E(R_PH) is a good proxy for E(S_PH) (which is what we care about, as you pointed out):

  • S_PH = (R_PH + 1) H.
  • I defined H as a constant.
  • Consequently, the greater is E(R_PH), the greater is E(S_PH).
Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-13T15:22:25.721Z · EA · GW

Thanks for the reply!

Why normalize by the variance in particular?

I mainly wanted to understand whether you thought the simple fact of attributing weights and then calculating a weighted mean might be intrinsically problematic. Weighting the various moral weight distributions by the reciprocal of their variances is just my preferred solution. That being said:

  • It is coherent with a Bayesian approach (see here).
  • It mitigates Pascal's Mugging (search for "Pascal’s Mugging refers" in this GiveWell article). This would not be the case if one used the standard deviation instead of the variance. For a distribution k X:
    • The mean E(k X) is k E(X).
    • The variance V(k X) is k^2 V(X).
      • Therefore the ratio between the mean and variance is inversely proportional to k.
    • The standard deviation V(k X)^0.5 is k V(X)^0.5.
      • Therefore the ratio between the mean and standard deviation does not depend on k.
  • It facilitates the calculation of the weights (as they are solely a function of the distributions).
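The scaling behaviour in the bullets above can be checked numerically: multiplying a distribution by k divides the mean-to-variance ratio by k, while leaving the mean-to-standard-deviation ratio unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(0.0, 1.0, 1_000_000)

ratios = {}
for k in (1.0, 10.0):
    kX = k * X
    # mean/variance shrinks by a factor of k; mean/standard deviation does not.
    ratios[k] = (kX.mean() / kX.var(), kX.mean() / kX.std())
```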

You are taking expected values over products of values, one of which is the moral weight, though, right?

I am calculating the mean of R = "negative utility of poultry living time as a fraction of the utility of human life" from the mean of R_PH, which is defined here.

, basically the relative moral weight of chickens wrt humans, in units pQALY/QALY

I think you meant "humans wrt chickens" (not "chickens wrt humans"), as "h" is in the numerator.

, basically the relative moral weight of humans wrt chickens, in units QALY/pQALY

I think you meant "chickens wrt humans" (not "humans wrt chickens"), as "c" is in the numerator.

But even on a fixed theory of consciousness, there could still be empirical uncertainty about , so you shouldn't assume  is fixed.

Let me try to match my variables to yours, based on what I defined here:

  • R_PH (= R_HP), which is what I am trying to calculate, is akin to , not 
  •  is akin to T*Q, where:
    • T = "poultry living time per capita (pyear/person/year)".
    • Q = "quality of the living conditions of poultry (-pQALY/pyear)".
  •  is akin to H = "utility of human life (QALY/person/year)".
  • PH = "moral weight of poultry birds relative to humans (QALY/pQALY)" is .
    • I did not set  to 1, because my PH represents , not .
Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-13T13:38:21.692Z · EA · GW

I gave a poor example (I have now rectified it above), but my general point is valid:

  • The expected value of X should not be calculated by replacing the input distributions by their means.
  • For example, for X = 1/X1, E(1/X1) is not equal to 1/E(X1).
  • As a result, one should not use (and I have not used) expected moral weights.
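A quick numerical illustration of the second bullet, assuming a uniform distribution for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.uniform(1.0, 3.0, 1_000_000)

mean_of_inverse = np.mean(1 / X1)   # ≈ ln(3)/2 ≈ 0.549
inverse_of_mean = 1 / np.mean(X1)   # ≈ 1/2
```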

I agree that the input distributions of my analysis might not be independent. However, that seems a potential concern for any Monte Carlo simulation, not just ones involving moral weight distributions.

Comment by Vasco Grilo (vascoamaralgrilo) on How to pursue a career in technical AI alignment · 2022-06-11T18:46:45.521Z · EA · GW

Thanks for highlighting text in bold!

Comment by Vasco Grilo (vascoamaralgrilo) on Leaving Google, Joining the Nucleic Acid Observatory · 2022-06-11T17:38:55.214Z · EA · GW

Thanks for sharing, congrats!

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-11T10:27:52.273Z · EA · GW

Which of these do you think is problematic (I have clarified above what I would do; see 2nd bullet)?

  • Giving weights to each of the theories of consciousness (e.g. as I described here).
  • Determining the overall moral weight distribution from the weighted mean of the moral weight distributions of the various theories of consciousness.

I might not have been clear about it (if that was the case, sorry!), but:

I think what you're proposing is the maximizing expected choice-worthiness/choiceworthiness approach to moral uncertainty, so you could look for discussions and critiques of that. Or, just more general treatments of moral uncertainty.

Thanks, I will have a look!

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-11T09:13:27.333Z · EA · GW

I agree that the following 2 metrics are different:

  • R_PH_mod = (T*E(PH)*Q)/H.
  • R_HP_mod = (T*Q)/(E(HP)*H).

However, as far as I understand, it would not make sense to use E(PH) or E(HP) instead of PH or HP. I am interested in determining E(R_PH) = E(R_HP), and therefore the expected value should only be calculated after all the operations.

In general, to determine a distribution X, which is a function of X1, X2, ..., and Xn, via a Monte Carlo simulation, I believe:

E(X) = E(X(X1, X2, ..., Xn)).

For me, it would not make sense to replace an input distribution by its mean (as you seem to be suggesting), e.g. because E(A/B) is not equal to E(A)/E(B).

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-11T08:50:29.506Z · EA · GW

Right, for f3 = 1000 f1, we would need some kind of information to change the weight of f3 from 50% (= 1/(1 + 1)) to 0.1% (= 0.001/(0.001 + 1)). 

Note that I do not think the starting functions are arbitrary. For the analysis of this post, for example, each function would represent a distribution for the moral weight of poultry birds relative to humans in QALY/pQALY, under a given theory.

In addition, to determine an overall moral weight given 2 distributions for the moral weight, MWA and MWB, I would weight them by the reciprocal of their variances (based on this analysis by Dario Amodei):

MW = (MWA/V(MWA) + MWB/V(MWB))/(1/V(MWA) + 1/V(MWB)).

Having this in mind, the higher the uncertainty of MWA relative to that of MWB, the smaller the weight of MWA.
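Applied to Monte Carlo samples, the formula above can be sketched as follows (the two lognormal distributions are hypothetical stand-ins for the theories' moral weight distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical samples for the moral weight under two theories (illustration only).
MWA = rng.lognormal(-1.0, 2.0, 1_000_000)   # high-uncertainty theory
MWB = rng.lognormal(0.0, 0.5, 1_000_000)    # low-uncertainty theory

wA, wB = 1 / MWA.var(), 1 / MWB.var()
# Elementwise weighted mean of the two sample arrays, with inverse-variance
# weights; the lower-variance distribution MWB dominates the combination.
MW = (wA * MWA + wB * MWB) / (wA + wB)
```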

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-10T15:15:49.313Z · EA · GW

Regarding your 2nd reason:

  • A priori, I would expect any theory of consciousness to produce a mean moral weight of poultry birds relative to humans in pQALY/QALY.
  • Subsequently (and arguably "naively", according to Brian, Luke and probably you), I would give weights to each of the theories of consciousness, and then determine the overall moral weight distribution from the weighted mean of the moral weight distributions of the various theories of consciousness (see here; the end of this sentence was edited after the reply below).

If I understand you correctly, you do not expect the above to be possible. I would very much welcome any sources explaining why that might be the case!

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-10T13:15:24.348Z · EA · GW

Thanks for the feedback!

To highlight my view better, I have moved the interpretation of the results regarding the moral weight and quality of the living conditions of poultry from the Methodology to the Discussion.

Regarding your 2nd question, assuming that by "plausible" you mean likely, my answer is yes: 

  • The mean and 82nd percentile of the moral weight distribution are equal, which translates into a chance of roughly 80% (20%) of the actual moral weight being smaller (larger) than the expected one.
  • That being said, I tend to think the focus should be on the expected moral weight, not on the quantile of the expected moral weight (although this is also relevant).
Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-10T11:38:20.017Z · EA · GW

Regarding your 1st reason, you seem to be referring to a distinction between the following distributions:

  • PH = "moral weight of poultry birds relative to humans (QALY/pQALY)" (i.e. poultry birds in the numerator, and humans in the denominator).
  • HP = "moral weight of humans relative to poultry birds (pQALY/QALY)" (i.e. humans in the numerator, and poultry birds in the denominator).

However, I think both distributions contain the same information, as HP = PH^-1. E(PH) is not equal to E(HP)^-1 (as I noted here), but R = "negative utility of poultry living time as a fraction of the utility of human life" is the same regardless of which of the above metrics is used. For T = "poultry living time per capita (pyear/person/year)", Q = "quality of the living conditions of poultry (-pQALY/pyear)", and H = "utility of human life (QALY/person/year)", the 2 ways of computing R are:

  • Using PH, i.e. with QALY/person/year in the numerator and denominator of R:
    • R_PH = (T*PH*Q)/H.
  • Using HP, i.e. with pQALY/person/year in the numerator and denominator of R:
    • R_HP = (T*Q)/(HP*H).

Since HP = PH^-1, R_PH = R_HP.

(I have skimmed the Felicifia's thread, which has loads of interesting discussions! Nevertheless, for the reasons I have been providing here, I still do not understand why calculating expected moral weights is problematic.)

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-10T10:48:07.148Z · EA · GW

I agree. That being said, I think most informed intuitions will imply the mean negative utility of poultry living time is at least comparable to that of human life. 

For example, your intuitions about the maximum moral weight might not significantly change the mean moral weight (as long as your intuitions about the intermediate quantiles are not too different from what I considered). Giving 5% weight to a null moral weight, and 95% weight to the moral weight following a loguniform distribution whose minimum and maximum are the 5th and 95th percentiles I estimated above, the mean moral weight is roughly 1.
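The calculation can be sketched as follows, taking the estimated 5th and 95th percentiles to be 2*10^-5 and 20:

```python
import numpy as np

# 5% weight on a null moral weight, 95% on a loguniform whose minimum and
# maximum are the estimated 5th and 95th percentiles (2e-5 and 20).
a, b = 2e-5, 20.0
loguniform_mean = (b - a) / np.log(b / a)            # mean of a loguniform on [a, b]
mixture_mean = 0.05 * 0.0 + 0.95 * loguniform_mean   # ≈ 1.4, i.e. of order 1
```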

I am also curious about your reasons for setting the maximum moral weight to 20. My distribution implies a maximum of 46, which is not much larger than 20 having in mind that my 95th percentile is 1 M times as large as my 5th percentile.

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-10T10:12:55.537Z · EA · GW

In my view, the weights of f1 and f2 depend on how much we trust f1 and f2, and therefore they are not arbitrary:

  • If we had absolutely no idea about in which function to trust more, giving the same weight to each of the functions (i.e. 50%) would seem intuitive.
  • In order to increase the weight of f1 from 50% to 99.9%, we would need to have new information updating us towards trusting much more in f1 over f2.
Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-09T18:16:58.878Z · EA · GW

Thanks for the comment, and introducing me to that post some weeks ago!

1- Note that I only made the Guesstimate model for illustration purposes, and to allow people to choose their own inputs. I am only using the Google Colab program and this Sheets to obtain the post results.

2- I do not find it odd that the moral weight of poultry birds relative to humans is as likely to be smaller than 2*10^-5 (5th percentile) as to be larger than 20 (95th percentile).

3- I tend to think distributions are more informative because they allow us to calculate the expected (mean) results. It would be possible to compute the mean moral weight from point estimates, but I think it makes more sense to assume a continuous probability density function.

4- I do not understand why calculating the expected value of moral weights is problematic. Brian mentions that:

We could naively try to compute expected utility and say that the expected value of creating two elephants is 50% * f1(two elephants) + 50% * f2(two elephants) = 50% * 1/2 + 50% * 2 = 1.25, which is greater than the expected value of 1 for creating the human. However, this doesn't work the way it did in the case of a single utility function, because utility functions can be rescaled arbitrarily, and there's no "right" way to compare different utility functions. For example, the utility function 1000 * f1 is equivalent to the utility function f1, since both utility functions imply the same behavior for a utilitarian. However, if we use 1000 * f1 instead of f1, our naive expected-value calculation now favors the human

I do not see why "utility functions can be rescaled arbitrarily". For the above case, I would say replacing f1 by 1000 f1 is not reasonable, because it is equivalent to increasing the weight of f1 from 50% (= 1/(1 + 1)) to 99.9% (= 1000/(1000 + 1)).

Comment by Vasco Grilo (vascoamaralgrilo) on Comparison between the hedonic utility of human life and poultry living time · 2022-06-09T17:33:40.682Z · EA · GW

Thanks for the comment!

In the 3rd point of Summary, I mention that that "The conclusions are very sensitive to the moral weight of poultry birds relative to humans, and the quality of their living conditions in factory farms relative to fully healthy life".

"You're saying that we're indifferent been giving a human a year of healthy life, or giving a chicken six months, right?"
Yes, assuming the "six months" would correspond to fully healthy poultry life (in reality, you might need more than 1 chicken).

I am also thinking about writing a short separate post about the mean moral weight, under various distributions, of the animals mentioned in section "Moral weights of various species" of the post from Luke Muehlhauser based on which I modelled the moral weight distribution.

Comment by Vasco Grilo (vascoamaralgrilo) on Beware surprising and suspicious convergence · 2022-06-07T14:54:24.963Z · EA · GW

Your point is very much related to the argument that promoting anti-speciesism decreases s-risk.

Comment by Vasco Grilo (vascoamaralgrilo) on Beware surprising and suspicious convergence · 2022-06-07T14:19:50.897Z · EA · GW

What a great post, thanks!

Comment by Vasco Grilo (vascoamaralgrilo) on Quantifying Uncertainty in GiveWell's GiveDirectly Cost-Effectiveness Analysis · 2022-06-06T12:30:59.052Z · EA · GW

Thanks for the reply!

0- Thanks for sharing the post about minor nuisances!

1- If one had e.g. the 10th and 90th percentiles instead of the 5th and 95th, would Guesstimate and Squiggle be able to model the distribution in one line? I think one of the major upsides of Python is its flexibility...

2- I agree that converting the quantiles to distribution parameters is the main hurdle. With that in mind, I posted this roughly 1 week ago.

3- Using this spreadsheet to determine the distribution parameters mu and sigma (as explained in the post linked just above), it is also possible to calculate the distribution with one line:
transfer_efficiency = np.random.lognormal(mu, sigma, N_samples)
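For reference, the conversion from quantiles to (mu, sigma) can itself be done in a few lines, for any pair of percentiles (answering point 1 above). This is a generic sketch: the quantile values below are illustrative assumptions, not figures from the GiveDirectly analysis.

```python
import numpy as np
from scipy.stats import norm

def lognormal_params_from_quantiles(q_low, q_high, p_low=0.05, p_high=0.95):
    """Solve for (mu, sigma) of a lognormal given two quantiles.

    ln(X) is normal, so ln(q) = mu + sigma * norm.ppf(p) at each quantile;
    two quantiles give two linear equations in (mu, sigma).
    """
    z_low, z_high = norm.ppf(p_low), norm.ppf(p_high)
    sigma = (np.log(q_high) - np.log(q_low)) / (z_high - z_low)
    mu = np.log(q_low) - sigma * z_low
    return mu, sigma

# Illustrative 5th/95th percentiles for a transfer efficiency:
mu, sigma = lognormal_params_from_quantiles(0.7, 0.9)
transfer_efficiency = np.random.lognormal(mu, sigma, 10**5)
```

Passing `p_low=0.1, p_high=0.9` handles the 10th/90th-percentile case in the same one call.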

Comment by Vasco Grilo (vascoamaralgrilo) on How to determine distribution parameters from quantiles · 2022-05-31T17:28:42.133Z · EA · GW

Great to know that you found it useful!

Comment by Vasco Grilo (vascoamaralgrilo) on How to determine distribution parameters from quantiles · 2022-05-31T15:51:26.609Z · EA · GW

Sorry, and thanks! They are working now.

Comment by Vasco Grilo (vascoamaralgrilo) on How to determine distribution parameters from quantiles · 2022-05-31T15:51:08.263Z · EA · GW

Sorry, and thanks! They are working now.

Comment by Vasco Grilo (vascoamaralgrilo) on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-05-30T10:30:26.187Z · EA · GW

I think "working on the bomb" refers to working towards AGI, and "jobs in arms control" to jobs whose goal is positively shaping the development of AI.

Comment by Vasco Grilo (vascoamaralgrilo) on Quantifying Uncertainty in GiveWell's GiveDirectly Cost-Effectiveness Analysis · 2022-05-30T07:14:02.096Z · EA · GW

Great work!

Do you have any thoughts regarding the advantages of Squiggle over Google Colab? The latter also allows for quantification of uncertainty (e.g. using numpy.random), named references, hypertext, and forking. Squiggle seems better in terms of change requests (although it is possible to make comments in Colab), but Google Colab is based on Python.

Comment by Vasco Grilo (vascoamaralgrilo) on The Fermi Paradox has not been dissolved · 2022-05-28T16:10:50.725Z · EA · GW

I think typical discussions of the Fermi Paradox, such as this one, focus on the observable universe, which is finite. Assuming the universe (observable or not) is infinite, would the probability of intelligent life be 1?

Comment by Vasco Grilo (vascoamaralgrilo) on Some unfun lessons I learned as a junior grantmaker · 2022-05-27T09:48:53.748Z · EA · GW

I liked the comment. Welcome!

I think determining the correlation between the ex-ante assessment of the grants and the ex-post impact could be worth it. This could be restricted to e.g. the top 25% of the grants which were made, as these are supposedly the most impactful.

Comment by Vasco Grilo (vascoamaralgrilo) on Simply locate yourself · 2022-05-21T16:32:12.721Z · EA · GW

Thanks for the post! One (oversimplified) analogy I like is thinking about life as a chess game. The best move only depends on the current state of the board!

Comment by Vasco Grilo (vascoamaralgrilo) on AMA: Lewis Bollard, Open Philanthropy · 2022-05-21T15:27:52.490Z · EA · GW

I have a draft post about the Comparison between the hedonic utility of human life and chicken suffering, and would very much welcome your feedback regarding the modelling of the Moral weight of chickens and Living conditions of chickens.

I appreciate that these are very hard to quantify, and that more research is needed, but I think modelling our current beliefs could still be useful to make trade-offs explicit. Ideally, I would like the distributions (of the moral weight and living conditions) to reflect some kind of aggregated "best guess" of the most informed people, and I believe you are amongst the best-positioned people to know this.

Comment by vascoamaralgrilo on [deleted post] 2022-05-20T08:47:30.380Z

Thanks!

Comment by vascoamaralgrilo on [deleted post] 2022-05-18T20:52:48.840Z

Thanks for building this page! Could you point me to any resources regarding concrete steps for creating an EA group? I found this and this and this, but I think I remember going through a page in the past with concrete steps...

Comment by Vasco Grilo (vascoamaralgrilo) on How good is The Humane League compared to the Against Malaria Foundation? · 2022-05-12T10:23:23.473Z · EA · GW

Thanks for this post!

AMF's current cost-effectiveness is about one life saved per 4.5 k$. This implies, according to the Guesstimate model, that the median of the ratio between THL's and AMF's cost-effectiveness is 20, and that its mean is larger than 1000.
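A median of 20 alongside a mean above 1000 is characteristic of heavy-tailed distributions. A minimal sketch of how that can happen with a lognormal ratio (the parameters here are illustrative assumptions, not values from the Guesstimate model):

```python
import numpy as np

# Hypothetical lognormal ratio distribution with median 20.
median, sigma = 20.0, 3.0        # sigma chosen for illustration
mu = np.log(median)              # a lognormal's median is exp(mu)
mean = np.exp(mu + sigma**2 / 2) # analytic lognormal mean

print(mean)  # ~1800, far above the median of 20
```

With enough spread (sigma), rare but extreme upside scenarios dominate the mean while leaving the median untouched.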

Comment by Vasco Grilo (vascoamaralgrilo) on See the dark world · 2022-05-09T07:33:43.591Z · EA · GW

Great post!

Comment by Vasco Grilo (vascoamaralgrilo) on My experience with imposter syndrome — and how to (partly) overcome it · 2022-05-08T11:25:35.710Z · EA · GW

You can download this spreadsheet to quickly assess the extent to which impostor syndrome experiences interfere with your life, based on the Clance IP Scale[1].

Comment by Vasco Grilo (vascoamaralgrilo) on My experience with imposter syndrome — and how to (partly) overcome it · 2022-05-08T09:44:14.737Z · EA · GW

Thanks, I really liked this article!

Comment by Vasco Grilo (vascoamaralgrilo) on "Long-Termism" vs. "Existential Risk" · 2022-04-30T20:27:41.992Z · EA · GW

"However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future -- who would all have save-able souls -- could vastly exceed the current number of Christians".

I had thought about the above before, thanks for pointing it out!

Comment by Vasco Grilo (vascoamaralgrilo) on Are there any uber-analyses of GiveWell/ACE top charities? · 2022-04-30T07:17:21.641Z · EA · GW

Thanks for the feedback! I am aware of that guide, but it seems to lack the incorporation of bayesian reasoning (it is highlighted here as an example of an explicit expected value approach).

Comment by Vasco Grilo (vascoamaralgrilo) on Why I am probably not a longtermist · 2022-04-29T12:10:34.901Z · EA · GW

Thanks for the post. Here are some comments (I am confident there is considerable overlap with the other comments, but I have not read them):

  • What was done well:
    • Willingness to challenge EA ideas in order to better understand them and improve them.
    • Points to possibly neglected topics in long-termism (e.g. mitigation of very bad outcomes).
    • Sections “What would convince me otherwise”.
    • Good arguments for why it is uncertain whether the long-term future will be good/bad. 
  • What could be improved:
    • “While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks”.
      • What about x-risks which do not involve extinction? For example, decreasing s-risk would decrease the likelihood of a future with large “misery”.
    • Sections “I do not think humanity is inherently super awesome” and “I am unsure whether the future will be better than today”.
      • Longtermism only requires that most of the expected value of our actions is in the future. It does not rely on predictions about how good the future will be.
    • Section “The length of the long-term future”.
      • Similarly, given the uncertainty about the length of the long-term future (Toby Ord guesses in The Precipice there is a “one in two chance that humanity avoids every existential catastrophe and eventually fulfils its potential”), most of the expected value should concern the long-term.
      • Explicit expected value calculations could overestimate the importance of the long-term. However, a more accurate Bayesian approach could still favour the long-term as long as the prior is not unreasonably narrow.
    • Section “The ability to influence the long-term future”.
      • The concept of s-risk could be mentioned here.