Follow up Q: Should 'this is also to captivate attention' be sometimes understood regarding some materials from the broader EA networks? 2022-01-13T20:19:50.043Z
What are some affordable RCT research organizations? 2022-01-12T21:22:19.167Z
Is the $1–20/life saved by a pneumococcus vaccine still available? At what scale? 2022-01-11T21:01:16.824Z
Should the Aravind Eye Care System be funded in lieu of The Fred Hollows Foundation? 2022-01-11T21:00:54.869Z
Can it be more cost-effective to prevent than to treat obstetric fistulas? 2022-01-10T19:39:18.030Z
What are some responsibility bundle bargains for medium-sized enterprises? 2022-01-10T19:38:56.028Z
Should Founders Pledge support only the non-conscious asset transfer arm of Bandhan’s Targeting the Hardcore Poor poverty graduation program? 2022-01-07T22:54:09.481Z
What are some cost-effective co-interventions (and alternatives) to bednet disbursal? 2022-01-07T22:53:30.441Z
How to select unconditional cash transfer beneficiaries to maximize cost-effectiveness (and what institutional changes can solve recipients’ issues)? 2022-01-07T22:52:59.904Z
Visual analog scale wellbeing data gathering method 2021-11-03T03:01:39.320Z
Weighted wellbeing population theory 2021-11-03T03:01:01.075Z
Wellbeing height and depth visualizations 2021-11-03T02:59:52.528Z
Faith-based groups partnership program pilot 2021-01-28T00:31:39.044Z
Universal cost-effectiveness calculator 2020-10-11T13:54:24.086Z
A counterfactual QALY for USD 2.60–28.94? 2020-09-06T21:45:08.476Z
Data Analysis Involvement Opportunity (~10 hours) 2020-09-06T21:39:30.929Z
Sample size and clustering advice needed 2020-07-29T14:21:02.976Z
EA Cameroon - COVID-19 Awareness and Prevention in the Santa Division of Cameroon Project Proposal 2020-07-25T20:19:01.733Z
Effective Altruism and International Trade 2019-10-15T03:21:37.652Z


Comment by brb243 on Introduction to Effective Altruism (Ajeya Cotra) · 2022-01-20T19:23:43.082Z · EA · GW

Yes, since it is a personal inspiration as opposed to a comparison or an appeal? Except perhaps for the image of the child with leukemia/the example of a relatively less impactful charity: it can seem that EA seeks to take away from loving families who would not approve of their children suffering, in order to share with those who are accustomed to their situations. A better image or message could be, for example, the blind person with a guide dog, where the dog seems quite excessive and so can connote that EA seeks to avoid making investments where they are unnecessary.

Comment by brb243 on The Subjective Experience of Time: Welfare Implications · 2022-01-20T19:00:13.045Z · EA · GW

What do you think of the idea that, whenever a reader perceives the author as posting research to show off rather than to provide value to the reader or respond to the thesis, the post should be redrafted? This can help mitigate reputational loss risk and be especially relevant to innovative ideas. Explaining low-probability possibilities on top of the unedited writing could be counterproductive to presenting readable research.

Comment by brb243 on Growth and the case against randomista development · 2022-01-20T16:46:08.338Z · EA · GW

This post seeks to start a conversation on the extent to which economic growth and RCT-evaluated programs should be prioritized in EA. It implies that RCT-researched programs relate to health interventions, which need not be the case. It suggests trade liberalization as a possible solution while omitting a discussion of the possible implications of various trade policy strategies. The piece also omits a discussion of the potential complementarity between growth and RCT-based programs. The Self-reported Life Satisfaction vs. GDP per capita graph uses a semi-log plot, which renders a logarithmic relationship as a line and can thus lead some readers to assign relatively greater value to increasing GDP at higher income levels. Thus, while this piece can be an excellent conversation starter, it should not inform the prioritization of specific interventions.

Comment by brb243 on The Narrowing Circle (Gwern) · 2022-01-20T15:34:00.085Z · EA · GW

This piece examines the accuracy of Peter Singer’s expanding moral circles theory by reasoning and examples. Since the validity of this arguably fundamental thesis can have wide implications, I recommend this post.

Comment by brb243 on The Subjective Experience of Time: Welfare Implications · 2022-01-20T15:23:45.946Z · EA · GW

(This is not a comprehensive Decade Review.) Related to the theory of weirdness points: after someone reads, “As of June 2020, I estimate there is approximately a 40% chance that [the critical flicker-fusion frequency] roughly tracks the subjective experience of time,” they can dismiss the entire sequence as seeking to trick readers into joining the excitement of the author or the organization about something which the entity itself evaluates as probably inaccurate. Similarly, the cortical oscillations section can be perceived as an exhibit of obscure literature rather than as developing an answer to the thesis. Thus, I recommend truncating an easily accessible version of this piece to the sections which seem to answer the question and presenting the other ideas as a list of topics open to others’ research.

Comment by brb243 on Why I prioritize moral circle expansion over artificial intelligence alignment · 2022-01-20T13:30:05.934Z · EA · GW

Hm, 1) how do you define a bias? What is your reference for evaluating whether something is biased? The objective should be to make the best decisions with the information available at any given time while supporting innovation and keeping an 'open mind.' This 'bias' assessment should be conducted to identify harmful actions that individuals deem positive due to their biases and to inform overall prioritization decisionmaking, rather than to change one's perspectives on the causes one prefers. This can contribute to systemic change and to the development of optimal specialization by individuals. This is a better way to approach biases.

The section on cooperation discourages collaboration because it understands cooperation as asserting one’s perspectives where these are not welcome rather than as advancing ventures. The part also states: “insofar as MCE is uncooperative, I think a large number of other EA interventions, including AIA, are similarly uncooperative.” The author’s assumptions, if not critically examined against evidence, can discourage persons who could be seeking, in this article, encouragement to cooperate with others from doing so, because one may wish to avoid sharing perspectives where they are not welcome.

An invitation for critical discussion can include an argument for the writing’s relevance to the development of answers to open-ended questions. But I can agree with your point that this can be superfluous, so I would add (and have added) 'prima facie' and edited the conclusion.

Comment by brb243 on Reducing long-term risks from malevolent actors · 2022-01-19T23:33:13.451Z · EA · GW

Yes, if the community focuses on preventing people with the specified personality traits from gaining influence, it may become unpopular among prominent circles (not because leaders would necessarily possess these traits but because they may seek to present in agreement with this thinking in order to keep their authority). Non-decisionmakers could be hesitant to publicly support frameworks which target actors who can be understood as threats, due to fear of negative consequences for themselves. Thus, I am suggesting that while it is important to advance global good, this should be presented in a less confrontational manner.

By "keeping true to their beliefs" I mean fundamental convictions about righteousness/honorable behavior. Yes, we would want an opportunity where they can be truly welcome and acknowledged for their actions. Or, working with their beliefs without triggering their malevolence on a large scale, since all decisions may have some costs in addition to their benefits – this is also what I mean by developing robust institutions that prevent powerful malevolent actors from causing harm.

An example can be a dictator who favors their own tribe. Then, if they can come to see their fundamental conviction as benefiting their own people, and be excited about the modern legal personhood research, they can gain satisfaction from skillfully maneuvering their forces to achieve their objectives, being admired for that, and besting their interlocutors in debates. If the dictator does not agree with interpreting their beliefs in a modern way, then prospects of the economic and geopolitical benefits of taking measures which consider individuals different from their own tribe can be introduced, and less harm is caused.

Comment by brb243 on Why I prioritize moral circle expansion over artificial intelligence alignment · 2022-01-19T22:45:22.738Z · EA · GW

While the central thesis to expand one’s moral circles can be well-enjoyed by the community, this post is not selling it well. This is exemplified by the “One might be biased towards AIA if…” section, which makes assumptions about individuals who focus on AI alignment. Further, while the post includes a section on cooperation, it discourages it. [Edit: Prima facie,] the post does not invite critical discussion. Thus, I would recommend this post to readers interested in moral circles expansion, AI alignment, and cooperation, as long as they are interested in a vibrant discourse.

Comment by brb243 on EA Diversity: Unpacking Pandora's Box · 2022-01-19T22:20:16.187Z · EA · GW

Hm, the 2016 post also looks independent, and possibly informed the CEA's official stance. The 2017 piece and the 2019 post by the same author also seem to build on other diversity writing to a relatively low extent. The inclusion of diversity in the CEA's 2020 planning could have advanced the discussion from 2016 as well as responded to general community discourse. The EA Survey collects general demographic data, so it may not seek to examine the community's diversity based on people's talent, experience, opinion, and appearance.

Comment by brb243 on The Drowning Child and the Expanding Circle · 2022-01-19T22:07:05.544Z · EA · GW

This framing of the “drowning child” experiment can best appeal to philosophy professors (as if hearing from a friend), so it can be shared with this niche audience. Some popular versions include this video (more neutral, appropriate for audiences of diverse ages) and this video (using younger-audience marketing). This experiment should be used together with more rational writing on high-impact opportunities and engagement specifics in order to motivate people to enjoy high-impact involvement.

Comment by brb243 on Reducing long-term risks from malevolent actors · 2022-01-19T21:53:52.532Z · EA · GW

Prima facie, this looks like a thoroughly researched, innovative piece recommending an important focus area. However, the notions of preventing malevolent actors from causing harm if they rise to power by developing robust institutions, advancing benevolent frameworks that enable actors to join alternatives as they gain decisionmaking leverage, and creating narratives which could inspire malevolent actors to contribute positively while keeping true to their beliefs are not discussed. Thus, using this framework for addressing malevolence can constitute an existential risk.

Comment by brb243 on The world is much better. The world is awful. The world can be much better. · 2022-01-19T21:38:38.859Z · EA · GW

This is an emotionally neutral introduction to thinking about solving global issues (compared to, for example, this somewhat emotional introduction). The writing uses an example from one EA-related area while not considering other areas. Thus, this piece should be read in conjunction with materials overviewing popular EA cause areas and ways of reasoning to constitute an in-depth introduction to EA thinking.

Comment by brb243 on Why I've come to think global priorities research is even more important than I thought · 2022-01-19T21:19:04.965Z · EA · GW

This post advocates for greater prioritization of global priorities research, including questions related to longtermism, because such research can increase the impact of the EA community.

The Positive recent progress section implies that research is thought of as traditional philosophical academic journal papers and similar writing, and further suggests that innovative discourse happens predominantly within academia. This thinking could hinder progress within the GPR field.

The Implications of longtermism and Patient longtermism sections can be interpreted as seeking to gain popular attention rather than inviting readers’ collaborative engagement. This can discourage readers interested in helping others from engaging with the ideas presented in the piece.

The Relative neglectedness part compares the recent growth of resources dedicated to GPR and to AI safety in the EA community. While this can be a true statement, it does not imply that GPR should be prioritized: rather than comparing the increase in resources deployed in each field, the marginal value of additional GPR effort should be considered.

The Scale of the community part assumes that EA resources are always perfectly mobile and thus neglects the possibility of suboptimal institutionalized thinking within EA due to GPR deprioritization at earlier stages, which can have significant negative implications also outside of EA, due to the community's leverage.

The Importance of ideas segment notes the limited interest of some academics in GPR while omitting a broader reflection on recent developments in global re-prioritization.

The How might you contribute? section uses language that connotes the author’s request that readers contribute by 1) identifying GPR topics, 2) asking others to conduct GPR, 3) applying for a junior role at a GPR research organization endorsed by 80,000 Hours, 4) researching the importance of a relatively unexplored issue, and/or 5) donating. Using request language may hurt readers’ prioritization rationality and reduce their long-term engagement with the field.

Thus, this piece makes a sincere appeal but lacks robust arguments to support its thesis.

Comment by brb243 on EA Diversity: Unpacking Pandora's Box · 2022-01-19T19:55:31.772Z · EA · GW

This post seems to have started a conversation on diversity in EA:

While I recommend the CEA’s stance as the resource on diversity, this piece to an extent elucidates the rationale for diversity and provides some elaboration on the meaning of this term.

Comment by brb243 on On Caring · 2022-01-19T17:51:29.825Z · EA · GW

This post introduces the notion of efficiently improving others’ wellbeing as an emotional burden. This should not be necessary; decisionmakers can focus on developing beneficial solutions with great subjective perceptions. Thus, I suggest that EA-related opportunities be marketed as cool and important rather than as challenging and sometimes overwhelming.

Comment by brb243 on Radical Empathy · 2022-01-19T16:31:14.776Z · EA · GW

This writing may be addressing the risk of rejection of EA due to its consideration of individuals. Prima facie, the post claims that “we don't extend empathy to everyone and everything,” but implies that the opposite should become the case as one’s thinking develops. The post seeks to gain authority by critiquing others. It does not invite collaborative critical thinking.

Thus, readers can aspire to critique others based on their ‘extent of empathy,’ which can mean the consideration of broad moral circles, without developing the skill of inviting critical thinking on various EA-related concepts.

This readers’ approach can constitute a reputational loss risk to the EA community among creative problem solvers while inviting hierarchically-minded persons who seek to gain status by reproducing notions.

While inviting participants in hierarchical structures which do not celebrate critical thinking may be important for the EA community, I suggest that the critical thinkers in these systems be invited first. Thus, I recommend that a critical discussion be joined to this piece, in order to encourage critically thinking, hierarchically-minded individuals to share EA-related ideas in their networks in a way favorable to these networks’ participants.

Comment by brb243 on 500 Million, But Not A Single One More · 2022-01-19T16:04:56.413Z · EA · GW

This piece does not mention the work of William Foege, who managed the smallpox eradication program globally, using Viktor Zhdanov’s research. More broadly, it celebrates the work of an arbitrary group of people who contributed to smallpox eradication. Thus, the post does not motivate contributing to the development of mutually beneficial solutions with one’s comparative advantage, considering the sustainable wellbeing of individuals. So, this writing should be offered as a critical thinking exercise after readers understand the basics of EA, rather than as an inspirational piece which should determine one’s motivations.

Comment by brb243 on Opinion: Estimating Invertebrate Sentience · 2022-01-19T15:05:49.140Z · EA · GW

This post summarizes the 2019 perspectives of the authors (some of whom continue to lead the Rethink Priorities research on animal sentience) on whether different taxa are sentient. The writing uses relatively few references to sentience studies, especially for taxa whose sentience has not been extensively researched. The piece does not include a definition of sentience, although the ability to experience pleasure and pain and to have emotions is suggested in different parts of the writing. The post does not introduce the notion of the intensity of conscious experiences. The writing emphasizes the importance of further research but does not summarize specific questions.

In conclusion, this post presents several Rethink Priorities researchers’ opinions on an important topic while specifying that Rethink Priorities should continue its research. Because the post does not introduce opportunities for further involvement and since some of the authors elaborated on this writing in 2020, I suggest that readers interested in animal sentience explore more current writing on this topic.

Comment by brb243 on Community vs Network · 2022-01-19T14:23:07.286Z · EA · GW

This post introduces the idea of structuring the EA community by cause area/career proximity as opposed to geographical closeness. Persons in each interest-/expertise-based network are involved with EA to different extents (the CEA’s traditional funnel model), and the few most involved amend EA thinking, which is subsequently shared with the network.

While this post offers an important notion of organizing the EA community by networks, it does not specify the possible interactions among networks and their impact on global good or mention sharing messages with key stakeholders in various networks in a manner that minimizes reputational loss risk and improves prospects for further EA-related dialogues. Further, the piece mentions feedback loops in the context of inner-to-outer career advice sharing, which can be a subset of overall knowledge sharing.

Thus, while I can recommend this writing, as a discussion starter, to relatively junior community organizers who may otherwise be hesitant to encourage their group members[1] to engage with the community beyond their local group, this thinking should not be taken as a guide for the organization of EA networks, because it does not pay attention to strategic network development.

  1. ^

    who understand EA and are willing to share their expertise with others in the broader EA community so that the engagement of these persons outside of their local group would be overall beneficial

Comment by brb243 on [linkpost] Peter Singer: The Hinge of History · 2022-01-18T21:18:35.751Z · EA · GW

Maybe the solution is to institutionalize a sustainable system positive for all. That can be enjoyed by both Singer and Karnofsky. Possibly, Peter Singer emphasizes ‘making sure that the future is good for individuals,’ which is a thought that Holden Karnofsky seeks to provoke[1] in more individuals whose interest was originally captivated by high-tech solutions which benefit a few elites. 

  1. ^

    Holden Karnofsky specifies the “appropriate reaction” to the most important century thesis as "... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one."

Comment by brb243 on Minimal-trust investigations · 2022-01-18T15:15:37.069Z · EA · GW

I would include the productivity of the reviewers and the scope of the investigations as factors in the time spent evaluating the evidence. For example, an investigator who analyzes the accuracy of key assumptions 10x faster and incorporates a 10x wider viewpoint can reach 100x better conclusions than another reviewer spending the same time.

I would also conduct an expected-value cost-benefit analysis in deciding to what extent minimal-trust investigations’ insights are shared. For example, if, by outlining the questions regarding LLIN effectiveness, EA can lose $1 billion with a 50% chance (because it loses appeal to some funders) but can gain $2 billion with a 10% chance which can be used 3x more cost-effectively, then the investigation should be shared.
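The expected-value comparison can be written out explicitly; the figures below are the hypothetical ones from this comment, not real estimates:

```python
# Hypothetical figures: a 50% chance of losing $1B in funding appeal
# versus a 10% chance of gaining $2B that is deployed 3x more cost-effectively.
expected_loss = 0.5 * 1e9        # $0.5B expected funding loss
expected_gain = 0.1 * 2e9 * 3    # $0.6B in cost-effectiveness-adjusted terms
share_investigation = expected_gain > expected_loss  # sharing is worth it
```

Since the adjusted expected gain ($0.6B) exceeds the expected loss ($0.5B), this toy calculation favors sharing the investigation.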

If a better solution exists, such as keeping the LLIN cost-effectiveness as a cool entry point while later motivating people to devise solutions which generate high wellbeing impact across futures, then the LLIN questions can be shared on a medium accessible to more senior people while the impressive numbers are exhibited publicly.

Then, using the above example, EA can lose $1 billion invested in malaria with 90% likelihood, develop a solution that sustainably addresses the fundamental issues (astronomically greater cost-effectiveness than LLINs because of the scale of the future), and gain $10 billion to find further solutions.

The question can be: can you keep speaking about systemic change intentions but difficulties with OPP while dropping questions, so that the development and scale-up of universally beneficial systemic solutions is supported?

Comment by brb243 on How much money donated and to where (in the space of animal wellbeing) would "cancel out" the harm from leasing a commercial building to a restaurant that presumably uses factory farmed animals? · 2022-01-18T12:58:18.989Z · EA · GW

I am assuming that, in the not-leasing-to-restaurants case, these businesses would not find additional central locations (so would not operate) and that those who would otherwise dine in these central restaurants would purchase meals from less central locations or non-restaurant stores. These alternative meals would use more animal products (presumably, for at-home cooking and as a competitive strategy of restaurants that cannot market a central location).

In the leasing-to-restaurants scenario, assuming excess demand in the non-restaurant case, the price would fall with increased supply. This could motivate those who would have otherwise bought less central or non-restaurant meals to purchase the central meals, which use fewer animal products. This indicates that leasing to a restaurant would benefit animal welfare.

This reasoning does not consider the leverage that a central restaurant can have. For example, if it is a very cool and affordable vegan place, then it can motivate competitors to introduce vegan options. Similarly, if it is a horrible steakhouse that exhibits animal suffering and poor worker standards, then other meat restaurants could suffer a reputational loss and plant-based restaurants could increase their market share. So, if it is possible to select renters, a cool, healthy vegan restaurant chain which uses marketing to retain customers would be the choice.

I am further assuming that 100% of non-restaurant tenants would operate from other locations and that this would not affect their operations or profit.

Comment by brb243 on Plan for Impact Certificate MVP · 2022-01-17T22:45:03.192Z · EA · GW

Would the price of an impact certificate be determined by the impact the organization is generating, so that it is like buying impact stocks? A maximum number of impact certificates should be specified upon the first certificates’ issuance to prevent the possibility of infinite certificate value dilution, which could disincentivize investments. Buyers should invest in organizations of high expected impact/cost. Organizations would seek to increase their impact/cost ratio to sell certificates at higher value. This would incentivize growth (appealing to investors) and innovation (selling certificates at a higher price).

The impact units can be proxies for wellbeing-adjusted life years, such as counterfactual education achieved, healthcare affected, animal welfare determinants implemented, etc. It should be ok to have a few units, just like several currencies.

This would constitute a market solution to systemic change. For example, when increasing education becomes more affordable, the price of an education impact certificate rises (higher productivity of an organization). Then, investors may be interested in purchasing healthcare certificates (which are not experiencing a price surge) while waiting for the price of education certificates to fall due to increased supply.

I think that this will be extremely useful for systemic change, can be profitable to investors apt in impact prediction, and can benefit entrepreneurs skilled in impact cost reduction.


I also thought of certificates which would pay a premium if a public benefit threshold is met. This could also use the broader EA community’s prediction capacity in a profitable way. The certificate premium would have to be set so that the issuer’s expected profit covers the expected loss, with the additional public benefit justifying the issuance, where:

[Issuer’s expected loss] = Certificate price × Certificate premium % × Probability of the threshold (with this premium incentive) being met

[Issuer’s expected profit] = Certificate price × Financial return on investment from the time of issue to the time of evaluation

[Additional public benefit] = Expected public benefit of issuing the certificate, additional to a no-certificate-incentive scenario
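The issuer's break-even condition for such a threshold-premium certificate can be sketched numerically; every parameter below (price, premium, probability, return) is a made-up illustration, not a figure from the post:

```python
# All parameters are hypothetical illustrations of the premium scheme above.
price = 100.0          # certificate price paid by the buyer
premium = 0.10         # premium share paid out if the public-benefit threshold is met
p_threshold_met = 0.6  # probability the threshold is met given this incentive
roi = 0.08             # issuer's financial return from issue to evaluation

expected_loss = price * premium * p_threshold_met  # expected premium payout
expected_profit = price * roi                      # return earned on the price received
issuer_viable = expected_profit >= expected_loss   # here 8.0 >= 6.0, so viable
```

With these numbers the issuer expects to pay out $6 in premiums against $8 of investment return, so issuance is viable even before counting the additional public benefit.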

Comment by brb243 on Six lessons learned from our first year - Animal Ask · 2022-01-17T20:21:38.584Z · EA · GW

Just randomly, I listened to a podcast that gives an argument against free-range chicken raising: apparently, the chickens (would) kill each other, and to prevent the killing, parts of their beaks are cut. Other animals can also feel irritated and thus be aggressive. It seems that exploration within an ample space should be possible. This can motivate more positively perceived interactions.

I also saw this slaughter manual that described low-cost non-stun slaughter alternatives (also, proper stunning equipment recommendations can be cost-effective in reducing suffering due to slaughter). Do you think that a hammer blow could be perceived as well as a stun? (This can be particularly relevant to areas that use other traditional slaughter methods, where animals do not lose consciousness.)

Do you think that animals can perceive premature death positively because they do not experience getting old? For example, if animals understand that they can explore and interact with others, have a family that will be well because the industrial farm is made for that, and will never get old, then they can have all they want (also if health issues are prevented).

To what extent do animals perceive power dynamics, among their own species and related genera/families and higher taxa? Apparently, even if a human walks into a chicken barn, the chickens change their behavior. To what extent do these dynamics determine the animals’ wellbeing? Do animals have a Maslow’s hierarchy of needs (so that as long as they are physically well and safe, they can enjoy interactions), or do they use Max-Neef’s matrix?

How do animals’ needs/preferences depend on the species? For example, is it that some insects are only concerned about finding food, getting into a good temperature and humidity, protecting their bodies, and reproduction? Or, is it that even crickets, if they are being eaten by others (maybe because they lack other food) experience a negative feeling from the interaction, in addition to the physical pain?

Can some animals, such as insects, ‘transcend’ the negative feelings or physical pain of being eaten through an understanding of cooperation (the capacity for contribution outside of one’s family can be limited for some non-human animals)? Would this depend on whether these decisions are made by one’s group (a same-species group, a small group of different species, or an ecosystem, depending on perception) or by ‘other’ animals? Is it that the crickets that are being eaten by others because they do not ‘go with the flow’ as well perceive it better than those who are eaten by a bird?

What are some experimental methods to determine whether an animal would have preferred to exist, all else equal? Is it possible that animals do not consider suicide because they believe that they need to multiply in order to evolve? If so, is this still needed, given the relatively rapid evolution by coordinated human systems, accumulation of knowledge, and use of technology? For example, is there something more fundamental to develop than what humans can achieve in this way?

In sum, what do different animals fundamentally need and how can this be provided at a low cost while keeping the benefits from animal products?

Comment by brb243 on Pathways to impact for forecasting and evaluation · 2022-01-17T16:13:52.880Z · EA · GW

I think that for forecasting, the key would be shifting users’ focus toward EA topics while maintaining popularity.

For evaluations, inclusive EA-related metrics should be specified.

Comment by brb243 on Improving Institutional Decision-Making: Which Institutions? (A Framework) · 2022-01-17T15:50:14.950Z · EA · GW

A comment on your calculation: Is it 


where P() is the probability of the total (across all times) WELLBY gains and losses?

Is there a probability threshold value that can inform whether a strategy is recommended? For example, if there is a 0.1% chance of success (assuming no expected WELLBY loss), would you refrain from endorsing the strategy, regardless of the size of the WELLBY gain? Or, if there is a 3% chance of a significant WELLBY loss, even if that is outweighed by the magnitude of the expected WELLBY gain, would you suggest involvement in that strategy?
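The threshold questions above can be made concrete with a toy expected-WELLBY calculation (all numbers made up): a pure expected-value rule would accept both strategies below, whereas a probability-threshold rule might reject either one.

```python
def expected_wellbys(p_gain, wellby_gain, p_loss, wellby_loss):
    """Expected WELLBYs of a strategy with a chance of a gain and a chance of a loss."""
    return p_gain * wellby_gain - p_loss * wellby_loss

# A 0.1% chance of success with no downside: still positive in expectation.
long_shot = expected_wellbys(0.001, 10_000_000, 0.0, 0.0)

# A 3% chance of a significant loss, outweighed by the expected gain.
risky = expected_wellbys(0.5, 1_000_000, 0.03, 2_000_000)
```

Whether a 0.1% success probability or a 3% chance of a large loss should disqualify a strategy, despite positive expected value, is exactly what the questions to the authors are probing.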

Which counterfactuals are you considering? The alternative use of resources to involvement in a strategy (or, you are mapping all involvement combinations and just selecting the best one) and the WELLBY lost and gained due to inaction or limited involvement in any strategy (this can be relevant especially to institutionalized dystopia or partial dystopia for some groups)?

Are you converting all non-financial constraints into cost? For example, the cost of paying people to develop networks (for example, by reducing their workload), the cost of developing convincing narratives in a low-risk way, the cost of developing solutions, the amount needed to flexibly gain influence momentum in relevant circles as opportunities arise (if this is needed, maybe this can fall under network development, but more internal as opposed to external advocacy). What else is needed to influence decisions which can be generalizable across different political systems?

How are timeframes considered in your model? For example, if developing different networks takes various amounts of time (assuming equivalent cost and expected WELLBY gains and losses), which one do you choose?

To what extent do you aim for impartiality in wellbeing achieved in terms of individuals? How are you relatively weighting (different amounts of) suffering and wellbeing?

Is continuous optimization assumed? For example, if the predicted WELLBY loss probability increases or decreases after some steps, are you re-running your calculations?

In addition to your calculation, I wanted to ask which institutions you would suggest EA prioritize in its influence. For example: the UN, because the institution already seeks to benefit others and the reputational risk of offering innovative solutions can be limited (such solutions may simply not be perpetuated to more influential ranks); the advocacy there should be for the One Health approach (which includes non-human animals) and for developing preventive suffering frameworks, in addition to convening decisionmakers when issues escalate. Also developing countries' governments, because they can be highly underresourced and could use skilled volunteer work for various tasks, while volunteers could prioritize EA-related tasks. And the governments of major global economies (assuming that economic and military power is convertible) and large MNCs, because they influence a large proportion of global production.

In summary, what specific strategies are you recommending?

Comment by brb243 on doing more good vs. doing the most good possible · 2022-01-17T13:42:43.860Z · EA · GW

Ah, I see. Yes, normalizing altruism with an effectiveness mindset within the work people are doing may be a more robust solution than inviting a limited resource budget allocation for almost 'external' EA ventures.

Comment by brb243 on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T23:24:50.603Z · EA · GW

Similarly to my comment below: if you ask the public about AI ethics and risks, they may first think about themselves, even suppressing their reasoning out of fear. One should also not bore or deter people with framings like 'what should AI do to be nice to your friends, even those who are not,' but rather preserve a sense of prestige and importance in answering the question.

Thus, the question could be presented as a mental challenge that can inform policy regulating safe AI development, considering that the result could be a superhumanly intelligent AI with extensive productive capacity and decisionmaking power, possibly able to understand and influence the needs of all living beings.

Then, one can consider: in a scenario of abundance, where good decisions can actually be enacted, what would an intelligent entity seek to motivate so that the needs of all living beings are catered to? Maybe cooperation and skillfully increasing others' wellbeing?

Comment by brb243 on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T22:59:43.353Z · EA · GW

The issues here can be that

  • respondents can omit morally more distant individuals from their considerations 
    • so a good future for all sentient beings would have to be specified or skillfully implied by questions leading to this one
  • replies can only include what respondents can imagine, can be influenced by popular media portrayals, and may focus on solving personal problems
    • for example, a person in a negative relationship who sees commercials that portray a product providing positive emotions may reply that people [like them] would have products to feel good - this does not identify the fundamental problems or include others' needs
      • thus, the ability to empathize with different alternatives, even those not commonly portrayed, should be offered and inclusivity in mental image invited
  • advanced economies' public may seek to secure a good future for itself, even risking the institutionalization of norms negative or suboptimal for others
    • thus, this survey should be run in different parts of different economies
      • but then, especially in emerging economies, ways to mitigate experimenter bias should be found

So, I would pay attention to the questions' specifics to make appropriate, thought-through recommendations to policymakers.

The alternative is to have a broad solution, such as continuously improving the welfare of all sentience, and then ask the public to endorse it through the type and wording of a sequence of questions, or to solicit answers that can accommodate or be interpreted as including such consideration (for example, 'do you want a good future for all?' - yes!). This should have much lower critical-thinking requirements.

Comment by brb243 on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T15:08:12.340Z · EA · GW

First, one should ask what non-elites can do to make a great positive impact. What comes to mind is donating, learning about EA, developing solutions, and presenting them to their networks. In addition, I was thinking about doing in-network elites' work so that those privileged individuals can focus more fully on EA-related advocacy within their circles.

Reasons one might refrain from approaching the public: 1) the reputational risk from public appeals to reject EA; 2) upskilling relatively large numbers of people, whose internal professionalism standards do not reflect global elites' time-effective communication norms, requires specialized capacity investment; and 3) sharing EA concepts in depth with a large number of individuals would constrain experts in the community.

There should be people who can (2) coach relevant professional communication while maintaining openness to an individual's expression, and (3) people can be encouraged to engage with more senior members only after they have learned extensively on their own and with peers; so EA should have the capacity to address these two concerns.

The remaining challenge in approaching the non-elite public is (1) minimizing reputational loss from public appeals to reject EA. This can be done by avoiding individuals who would be more likely to advocate against EA and developing narratives where such public rejections would benefit the community.

Thus, some relevant questions can cover opinions on the idea of continuous pro bono learning on how to benefit others to a greater extent, perspectives on preferred learning models, linking social media posting and EA-related learning motivations, and ads that would motivate respondents' peers to start learning. Then, the appropriate ads can be offered to low reputational loss risk and high participation potential audiences based on their social media activity.

In addition to gathering data on what advertisements would invite the right people to the community, I thought of gaining the determinants of persons' wellbeing in order to identify possible win-win solutions and conducting a network analysis to target nodes of influence that have the greatest wellbeing impact.

Comment by brb243 on ImpactMatters was acquired by CharityNavigator; but it doesn't seem to have been incorporated, presented, or used in a great way. (Update/boost) · 2022-01-14T00:31:07.671Z · EA · GW

I think that intra-area comparisons may be OK, partly because of acceptability to users, who may think traditionally about charity (for example, opening Charity Navigator because they received an emotional appeal from a US-based charity). But more areas that relate to EA causes should be included, listed in a (strategic) combination with non-EA cause areas (more clicks); perhaps the impact units and the top charity per impact unit should be listed alongside ($40,000 to train a guide dog for 1 person, $100 to restore the eyesight of 1 person, etc). More areas that involve both EA and non-EA charities should also be included (e. g. providing meals to the hungry, rather than only meals in US homeless shelters; then, for example, Yemen Aid, which provides 6.4 meals/$1, can compete with $2/meal). The distinction between 'at the risk of famine' and 'food insecure' is lost by this grouping, but it can be included in the further description. This can be sold to Charity Navigator as greater total impact. Potentially, some computing capacity from people in EA looking to volunteer, or from companies such as Google, could come in a bundle to improve CN's popularity? The risk is that Charity Navigator loses funding-influence share because users interested in charity assessment find it less captivating, but ways to achieve the opposite should be developed.
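The meals comparison above amounts to normalizing charities to a shared impact unit (cost per meal); a minimal sketch using only the figures quoted in the comment (6.4 meals/$1 vs. $2/meal), with the shelter label being my own placeholder:

```python
# Normalize each charity to cost per meal so the comparison is direct.
charities = {
    "Yemen Aid": 1.0 / 6.4,                # $1 buys 6.4 meals -> ~$0.156 per meal
    "US homeless shelter (example)": 2.0,  # $2 per meal, as quoted in the comment
}

# The cheapest provider per impact unit, and how large the gap is.
cheapest = min(charities, key=charities.get)
ratio = charities["US homeless shelter (example)"] / charities["Yemen Aid"]
print(f"{cheapest}: ~{ratio:.1f}x more meals per dollar")
```

This is the same cost-per-impact-unit logic as the guide dog vs. cataract surgery example, just applied within one listed area.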

You should speak with Sanjay to speak with Elijah Goldberg or Tamsin Chen.

Podcasters can ask for a brief interview with Dean Karlan, for example to elaborate on ways of motivating charities, especially those in neglected cause and geographical areas with high potential, to develop solutions that support systemic change toward institutions that safeguard continuous improvement of living standards for an increasing number of individuals, and to test these solutions affordably. [For example, should they present data in a specific format to Charity Navigator to get a high rating, or emulate and improve the approaches of top charities in order to gain further funding?] The last time I checked (I asked the Charity Navigator 'agent'), only US-based charities (including internationally operating ones that have only a foundation in the US) were evaluated, due to the use of tax statements, but incorporating evaluations of international organizations was being considered. Elijah was also thinking of developing a simple questionnaire to assess impact.

Also, Northwestern students can inquire regarding a class project similar to the Pro bono student impact audit opportunity shared by ImpactMatters in 2020 to develop a rapid evaluation framework within select causes. This may already exist.

In terms of a thorough but more affordable and impactful evaluation, I was/am suggesting (linked in this post) that

  • counterfactual beneficiaries' outcomes are estimated by increasingly more robust methods (organizations may be interested in accurate predictions due to estimates' records and later data validation)
  • funders' alternative investments are considered (e. g. personal investment vs. funding of a comparable program)
  • in-program's-absence caretakers' alternative investments are considered (e. g. into necessities vs. luxuries vs. 'independence' capital development)
  • pro bono impact increase consultancy is offered
    • at least for charities which can increase their cost-effectiveness with relative facility, such as by geographical expansion or animal welfare considerations

I was also thinking about measuring subjective wellbeing changes via the visual analog scale method as the end goal. If this were given also to individuals (hypothetically including non-humans) who would benefit indirectly from an externality or another change, it could address the issue of an organization that tackles multiple issues on a priority-needs basis, and thus has difficulty focusing on one or a few impact metrics; it may also detect systemic change better.

The key would be to get organizations to fill out a form, such as by automatically generating an appealing annual report for them, offering discounts on evaluation and fundraising consultancy (such as narrating competitiveness due to higher impact), or offering the Charity Navigator listing.

To conclude, the current rating works well, but important additionalities may be available.

Comment by brb243 on doing more good vs. doing the most good possible · 2022-01-13T22:30:25.227Z · EA · GW

What do you think of:

Effective altruism is a philosophical and social movement where people use reasoning and evidence to do the most good with some of their resources. The key is 'with some of their resources.' It is better than saying 'with their spare resources,' because if one approaches, e. g., 30% of their networks to implement EA-related changes, the definition would be inaccurate, because these networks are not 'spare.' It is also better than saying 'with their philanthropically allocated resources' or anything else that includes 'allocated,' because that connotes a static extent and a distinction between philanthropically and non-philanthropically 'utilized' 'resources' (which may make the definition inaccurate when someone uses some of their resources partly in a good-maximizing manner and partly in another way).

This definition can be also acceptable to people who can think about how much of resources they can do good with and how, as opposed to thinking in absolutes or competitive terms.

The issue can be that 'some of their resources' is vague, so an institutional understanding of an acceptable extent of resources (any extent that does not prevent people from engaging in what truly makes them happy, such as sharing great times with others, and having an easy life?) needs to be maintained.

I am also writing 'reasoning and evidence' (first 'reasoning,' also as opposed to 'reason,' and then 'evidence') because readers can thus understand the process of coming up with solutions based on information they find and can trust, as opposed to being handed evidence (which connotes incriminating evidence in court) and being empowered to assert it by 'reason' (implying a framework under which one is right and thus should not be argued against).

Comment by brb243 on The big problem with how we do outreach · 2022-01-13T21:09:29.690Z · EA · GW

I think that the presented idea is to get people to care about others without making them reject the idea through an appeal to rationality. The discussion on pure rationality probably not being the answer either is a great 'compromise.' (One can say that respect is paid to the persons who are thus able to argue back, or that the audience is pleasantly entertained.) This step could be skipped by expressing the presumption that people want to care about others the best they can, with their spare resources, and just have not come across the latest materials on this topic.

Then, the conversation can go as: ok, so we are trying to do good here [presuming shared meaning but not explaining], excellent, well who donates to charities, ok, anyone has done a thorough competition/cooperation (since charities) research, ok, by what means, we are/some philanthropes are paying a lot of money to do this research to identify the most impactful ones, in terms of cost-effectiveness. You can view them here and here. Definitely recommended, these organizations are also cooperative, so if there is an opportunity to make greater impact with the donors' funding, they will go for it.

Wow, hehe, I have questions. [Note that the pitch distracts from the concept of care/feeling of responsibility by focusing on impact but does not request people to understand utilitarianism.]

I would not suggest pitching 'expanding the circle of compassion' upfront since that is not entertaining to people but seems like some work that they should otherwise not need to do so the persons may be reluctant to implement some EA principles.

Comment by brb243 on Pitching EA to someone who believes certain goods can't be assigned a value · 2022-01-13T20:36:34.432Z · EA · GW

Hm, what about saying that if one values different objectives, they can still try to do the most good with their spare resources, making some kind of conditional or weighted average in their mind? For example, one can think that if they enjoyed Van Gogh 'this much,' they can then focus on family 'that much,' and then make philanthropic investments. This can enable people to do the same 'emotions-based prioritization,' such as caring for a family's basic needs and enjoying the aesthetics of family presentation, or enable communities to act in this way, such as seeing whether non-human animals should receive some attention, whether economics should be improved, and whether time to enjoy relationships should be allocated. This may be more feasible in less industrialized areas, which may make more decisions by 'emotional consensus.'

Comment by brb243 on Ends vs. Means: To what extent should we be willing to sacrifice nuance and epistemic humility when communicating with people outside EA? · 2022-01-13T20:13:14.484Z · EA · GW

| if we ultimately want people to act, how much should we prize accuracy vs. entertainment value?

Maybe being vague enough that what we present is accurate, or can be interpreted as such, while upon elaboration a greater understanding of EA concepts can be developed. Being personally entertaining through the ability to create interesting or captivating dynamics. This can motivate people to internalize broad objectives and seek their own evidence to develop perspectives/personal strategies. There will be no inconsistency between ends and means.

Comment by brb243 on Should the Aravind Eye Care System be funded in lieu of The Fred Hollows Foundation? · 2022-01-13T00:32:45.219Z · EA · GW

Oh, that is interesting. I see that TLYCS moves a few million $/year, GW more than a hundred million.

Yes. It is interesting that TLYCS specifies a 10× greater cost-effectiveness than GW estimates in its updated report.

OK, thank you for the tip!

Comment by brb243 on Should Founders Pledge support only the non-conscious asset transfer arm of Bandhan’s Targeting the Hardcore Poor poverty graduation program? · 2022-01-12T17:57:53.751Z · EA · GW

Ok, the wording has been changed. This is also a semi-rhetorical question, something like: wouldn't it be better if animals weren't factory farmed by humans but instead received considerate, motivated care, in exchange for pleasant cooperation on meaningful objectives? These can sound a bit weird if they are presented in a way that does not compel people to empathize but rather conveys data concisely to make further progress? Am I too influenced by the outside-of-EA world?

Yes, it makes sense. Maybe some people prefer livestock, just like many GD beneficiaries, because it provides a continuous source of income (such as from milk) and can also be sold in emergencies. Still, assuming there are enough people who would benefit from the non-livestock transfer option (while those who would rather, or could more feasibly, receive an animal asset would be left without funding), supporting only the non-conscious asset beneficiaries could set an important institutional norm: human economic growth not at the cost of other individuals' suffering?

Comment by brb243 on Should the Aravind Eye Care System be funded in lieu of The Fred Hollows Foundation? · 2022-01-12T17:39:27.829Z · EA · GW

Ok, noted. But then if people just want to skim the post in seconds (especially those who may not be so interested in the first place) do you think maybe headings or infographics would be more appropriate? What would you recommend?

To the Fred Hollows Foundation? Will they not assert that, first, they do not operate in India, and that their services are needed where they do operate, plus they are not an investing company? This is why funders are a better audience to decide? Especially considering the innovativeness of EAs? Nevertheless, noted.

To answer:

Extent of FHF support from EA: FHF is a TLYCS-recommended charity. The cost of a cataract surgery versus that of training a guide dog for a blind person is a commonly cited example in EA. The revenue of FHF was $650m in 2019. I am not finding any funding from Open Philanthropy/Good Ventures, but I think EA Brazil selected a cataract organization as one of its 3 non-GW charities, and in the Philippines they also seem to have tried to find charities doing work similar to GW organizations, including The Fred Hollows Foundation on their list. Other local effective-donation organizations may include FHF. So, I would suggest that people are at least thinking, if not donating.

Cost-effectiveness analyses: From the TLYCS page, which cites a World Bank and a GiveWell resource, the cost of one surgery is $100 and prevents 1–30 years of blindness plus 1–30 years of low vision. Based on GW's analysis, the disability weight of blindness is 0.2. IHME cites 0.187 (range 0.124–0.26) for blindness (e. g. row 209 (original resource)) and 0.031 for moderate vision impairment (e. g. row 207). Assuming 15.5 years each of blindness and of low vision prevented, that is 15.5*(0.187+0.031)=3.379 DALYs averted per $100. Assuming a life of 70 years, 100/3.379*70=$2,072/statistical life saved. That is competitive with AMF, which cites about $3,500 per statistical life saved, but not with e. g. DMI, which should avert malaria for $600. This neglects that some people may experience a negative quality of life (below death), so the assumption of full life quality (1.00) does not make sense; if a life is saved, that can add suffering (though if blindness is averted, that may reduce suffering).
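The arithmetic above can be checked directly; this sketch only reproduces the comment's own figures (IHME disability weights, 15.5 years each of blindness and low vision averted, $100 per surgery, a 70-DALY 'statistical life'):

```python
# Back-of-the-envelope cataract surgery cost-effectiveness, figures from the comment.
dw_blindness = 0.187    # IHME disability weight for blindness
dw_low_vision = 0.031   # IHME disability weight for moderate vision impairment
years_averted = 15.5    # years of each condition averted (midpoint of 1-30)
cost_per_surgery = 100.0

dalys_averted = years_averted * (dw_blindness + dw_low_vision)  # DALYs per surgery
cost_per_daly = cost_per_surgery / dalys_averted                # dollars per DALY
cost_per_life = cost_per_daly * 70                              # 70-DALY "statistical life"
print(round(dalys_averted, 3), round(cost_per_daly, 2), round(cost_per_life))
```

Per these inputs the result lands near the ~$2,072 per statistical life quoted above, which is what makes the comparison with AMF's ~$3,500 figure meaningful.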

Why has Aravind not expanded (uncertain): Yes, perhaps scalability has issues, or there is simply limited motivation to expand, possibly including a preference for keeping respect for the founder by not being too competitive, given the developing-country and personal mission-driven company context. Limited access to credit, or the low efficiency and professionalism of local financial markets, may be another expansion barrier. According to The Fortune at the Bottom of the Pyramid (the inspiration for this post), the organization seemed well organized in training doctors and delegating work to differently skilled personnel to keep costs low. But that was almost 20 years ago, so maybe competitors implemented a similar approach, or the real cost of providing a free cataract surgery to a patient increased due to the decreasing cost of other treatments.

Comment by brb243 on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2021-10-09T22:22:29.800Z · EA · GW

Hello Ben,

I have a question about your cost competitiveness and scale, a hypothesis about your counterfactual impact, and additional questions.

Are the fees lower than those of other providers, such as MoneyGram? The last time I checked, the total for a $100 transfer was lower for MoneyGram. The difference should grow with larger transfers, since Wave charges a percentage whereas MoneyGram charges a flat fee.
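The percentage-vs.-flat-fee point can be illustrated with hypothetical rates; the 2% and $1.99 figures below are assumptions for the sketch, not quoted Wave or MoneyGram prices:

```python
# If the flat fee is already cheaper at $100, a percentage fee falls further
# behind as the transfer amount grows - the gap widens monotonically.
def percentage_fee(amount, rate=0.02):
    """Fee charged as a fixed fraction of the transfer (hypothetical 2%)."""
    return amount * rate

def flat_fee(_amount, fee=1.99):
    """Fee charged regardless of transfer size (hypothetical $1.99)."""
    return fee

for amount in (100, 500, 2000):
    gap = percentage_fee(amount) - flat_fee(amount)
    print(f"${amount}: percentage fee costs ${gap:.2f} more")
```

Under these assumed rates the gap grows from cents at $100 to tens of dollars at $2,000, matching the observation that the difference widens with transfer size.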

I may be mistaken, and it may depend on the country, but I never saw Wave in Kenya. I saw Western Union and MoneyGram frequently at banks and elsewhere. I also saw M-Pesa and Safaricom agents at every corner.

Your impact competitive advantage may be the coverage of otherwise unbanked clients. This seems to me to be the case based on the TechCrunch+ article (francophone market). However, I am not sure about your decisionmaking between unbanked coverage (e. g. rural) and transfer density (e. g. outcompeting providers in cities).

Even if it is unwise to seek to make money by significantly serving remote locations, an important argument for investing into Wave as opposed to other payment providers is your funding of non-profit development specialists.

In addition, your marketing seems to be gender inclusive and focused on helping people send money at a good price. This can be compared with the marketing of your competitor Paga, which may seem to perpetuate power structures that do not aim for inclusive advancement. Assuming positive personal intentions may prevent conflict in places where conflict is rooted in suboptimal/rejecting/aggressive interpersonal relationships. This speaks further for investing in Wave.

Is any of this accurate?

Also, what is your perspective on fiduciary duty? Can shareholders agree to maximize metrics other than profit (e. g. social benefit)?

Also, what do you think of 100% for-good investments? People are 'hired' to work on a venture (e. g. business ideas in East Africa) and advance normative change to resolve locally identified fundamental problems (in addition to their primary focus). For example, an agricultural products processing business can also remind people to buy nets, take preventive healthcare measures, keep learning to innovate local efficiencies and gain global job market skills, consider animal welfare, address and report bullies, and invest profits wisely.

Is it accurate that, for most ventures in emerging markets, the Founders Pledge report does not hold true, in that if a particular investor does not fund an SME (e. g. <$10,000), the venture is not advanced, since the innovators are not trained in Silicon Valley/'Western' pitching and have limited connections to affluent persons? Or is it always possible to find counterfactual opportunities?

What do you think of working with governments on proposed investments adding conditions that should extend to entire industries, for example, safety standards in mining (p. 19) or shifting the national dynamic comparative advantage from potentially negative externality industries, such as a slaughterhouse for export (p. 39), to neutral or positive ones, such as a fruit factory (p. 41)?

Reading this report, you may be also interested in Micropay (U) Ltd. (p. 79) and the Agrikatale Mobile App (p. 82).

Comment by brb243 on Major UN report discusses existential risk and future generations (summary) · 2021-10-09T15:05:32.955Z · EA · GW

(Sidenote) Is the "inflection point in history" (p. 3) both from convex to concave and vice versa since humanitarian and environmental progress can be measured in addition to GDP (p. 4)?

Informing the "Emergency Platform to respond to complex global crises" (p. 65, para. 101) could be valuable. What are the requests? Develop skills to respond to crises that are impossible to prevent since funding by a multilateral system can be agreed upon?

What are the investments into "resilience and prevention" (p. 55, para. 77) constrained by? Is this decisionmaking of large players, such as G20 governments? Is there a way to inform such, for example by tech giants' (Google, Facebook) advocacy? How would one best make sure that such resilience includes all across geographies and roles? Is this a matter of compiling data on past emergencies and viable responses and pre-negotiating cooperation?

From the report it seems to me that a complex global crisis is already occurring. Concretizing this to what I am familiar with: 1) multinational corporations fail to consider labor and environmental standards along their value chains beyond potentially relatively unimpactful PR measures, 2) media fail to consider users in addition to advertisers, 3) conflict zone actors understand power by economic rather than development metrics, and 4) human health lacks the benefits of the One Health approach.

Based on my thinking, a potentially valuable addition could be 1) a PR organization that would score companies based on their GVC policies/counterfactual impact, 2) regulation of the media market based on accuracy and users' freedom (e. g. not addicted), 3) conflict zone structures' assistance in greater and better power over all, 4) One Health normalization across signatories.

Together, this should reduce stakeholders' willingness and ability to act without solidarity and effectively address issues that may jeopardize a working multilateral system.

With any requests, feel free to contact the new organization High Impact Professionals that gathers professionals willing to work on high-impact projects pro bono or submit your requests to the EA Impact CoLabs.

Comment by brb243 on Digital People Would Be An Even Bigger Deal · 2021-08-30T18:50:22.720Z · EA · GW

How is sentience going to be approached in this model? Would some entities that would otherwise experience suffering be coded as not conscious for some periods of time?

Comment by brb243 on What are some historical examples of people and organizations who've influenced people to do more good? · 2021-03-25T16:41:14.164Z · EA · GW

Is this a random yet captivating and intellectually sounding text automatic generator test?

Comment by brb243 on Effective Giving Advocacy Challenge · 2020-10-29T13:29:45.432Z · EA · GW

For sure! If you want to get a bit serious about this, please join the EA Lobbying Discussion next Thursday (November 5, 2020) at 5pm UK time [link missing]. We should have an overview of negotiation basics and policy advocacy insights from two government officials. Maximize your leverage by institutionalizing a 'giving' policy.

Comment by brb243 on Data Analysis Involvement Opportunity (~10 hours) · 2020-10-15T19:27:17.716Z · EA · GW

Hi! This one should be taken care of. I can let you know about other opportunities.

Comment by brb243 on Effective Altruism is a Question (not an ideology) · 2020-10-11T20:37:21.989Z · EA · GW

But I am an Effective Altruist, Helen -- 😓 By definition, as a member of the Effective Altruism community, no matter what I do or what I ask. Of course, engaging with the movement, I am more likely to expand my moral circles, gain knowledge and motivation to do great, et cetera, just like the other Effective Altruists. Held accountable by the institution itself, I enter, remain, or exit freely.

Comment by brb243 on A counterfactual QALY for USD 2.60–28.94? · 2020-10-08T20:57:13.317Z · EA · GW


I found the dataset that I thought I had seen before: the Institute for Health Metrics and Evaluation (IHME) Global Burden of Disease Study 2017 (GBD 2017) Disability Weights. Disability weights are the changes in Health-related Quality of Life (HRQoL) due to a condition. I re-ran the calculations and found the cost-effectiveness of the mobile clinics project to be 26.63 USD/QALY, with a low estimate of 184.14 USD/QALY and a high estimate of 6.33 USD/QALY. I used the same data to estimate the cost-effectiveness of AMF and found 56.07 USD/QALY (low 112.14, high 11.21). The Business Insider AMF number is about 49.76 USD/QALY. Thus, these updated calculations may be more accurate. Still, the calculations do not take into account the preventive care outcomes, deaths averted due to the Ebola outbreak response, or economic benefits (e. g. of deworming) that may lead to further health improvements, let alone the positive long-term virtuous cycle of improved health and wealth; but that may apply to other health-related programs too.
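A minimal sketch of the method described above, computing cost per QALY from disability weights and years of a condition averted; the inputs below are hypothetical placeholders, not the figures behind the 26.63 USD/QALY estimate:

```python
# QALYs gained per case = disability weight (HRQoL change) x years averted;
# cost-effectiveness = total program cost / total QALYs gained.
def cost_per_qaly(total_cost, cases):
    """cases: list of (disability_weight, years_averted, people_treated) tuples."""
    qalys = sum(dw * years * n for dw, years, n in cases)
    return total_cost / qalys

# Hypothetical program: $10,000 spent across two kinds of conditions averted.
example = cost_per_qaly(
    10_000.0,
    [(0.051, 2.0, 1_000),  # placeholder: a moderate condition, 2 years, 1,000 people
     (0.133, 0.5, 500)],   # placeholder: a severe short episode, 500 people
)
print(round(example, 2))
```

This is only the direct short-term QALY term; as the comment notes, long-term effects (schooling, economic gains, prevention) would need additional terms.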

Comment by brb243 on Data Analysis Involvement Opportunity (~10 hours) · 2020-09-30T20:36:25.725Z · EA · GW

OK. I can see it on and download it from Google Drive. Perhaps this link:


I can also send via e-mail, feel free to message.

Comment by brb243 on Guidelines on depicting poverty · 2020-09-21T17:13:59.682Z · EA · GW

When one is portraying reality accurately (people living on standards far below those of advanced economies), there may seem to be no problem (people living peacefully on the fields or in slums, disabled people asking for funds, sick persons resting at home). It is just the reality; these people are just a part of the picture. They are accepted by the society, although perhaps not as much catered to.

I am actually thinking that both portraying someone's negative emotional appeal (which does not allow the addressee to decline donating without losing the appealer's acceptance and ending the relationship) and portraying an opportunity to make a great impact put the intended beneficiaries into a subordinate position. The latter only necessitates different emotional work from the portrayed: exhibiting joy and a proudly grateful performance, as opposed to hatred and a feeling of injustice. Since those in relative power may wish to feel appreciated/loved by independent persons, the latter may be better 'customer care' for the donors.

The best case would be perhaps sincere reality, with all its benefits (e. g. good relationships) and economic donation opportunities. No emotions directed to the audience. This would also enable donors to make independent decisions, doing good for absolutely nothing in return.

Now the question is how effective the portrayal of people simply living somewhere would be in soliciting donations. I hope highly, because providing unconditional love is what makes people feel truly well.

Comment by brb243 on Denise_Melchin's Shortform · 2020-09-19T17:28:28.811Z · EA · GW

I think that thinking about longtermism enables people to feel empowered to solve problems somewhat beyond reality, truly feeling the prestige/privilege/knowing-better of 'doing the most good.' This may also be a viewpoint applicable to those who really do not have to worry about finances, though that is relative. Which links to my second point: some affluent persons enjoy speaking about innovative solutions, reflecting current power structures defined by high technology, among others. It would otherwise be hard to build a community of people feeling the prestige of being paid a little to do good, or of donating to marginally improve some of the current global institutions that cause the present problems. Or would it?

Comment by brb243 on A counterfactual QALY for USD 2.60–28.94? · 2020-09-15T18:30:12.365Z · EA · GW

Hello. I apologize for the late reply; I was moving over the weekend. I am looking at the IHME DALY-by-cause data (my calculations here), but these do not seem to take into account the long-term effects of the diseases. For example, deworming and vitamin A supplementation may have positive long-term effects, in terms of schooling and economic gains, that far outweigh the direct short-term QALY losses. From there, the upper estimate of 5. By simple malaria I presume a case that does not require immediate medical attention but that may still result in a severe condition if untreated (CDC). For the life-threatening conditions, my rationale was also that children treated for severe acute malnutrition are younger than average-age patients, and that persons who survive 5 years live on average longer than life expectancy.

Also, the QALY estimates do not take into account the effects of preventive measures, e. g. almost 90,000 persons informed about STIs and the response to a cholera outbreak (training and material provided): before the intervention, 5 persons died; afterwards, no further deaths occurred.

On that note, I would actually appreciate it if anyone could provide more credible estimates, taking into account the effectiveness and long-term consequences of the treatment. I am sure that REO would welcome such cooperation, also for capacity-building reasons.