Always remember that impact is achieved through direct work...
Even in emerging economies, impact needs funding. (Effective) donations are not mentioned in the post. However, they should be quite central, because of:
1) Solidarity: Even less privileged people in EA in LMICs should keep solidarity with large donors: everyone is giving up some 'next level' comforts compared to their norm, whether that is the smaller Tesla or walking for an hour every day.
That personal commitment can make the community a yet more honorable place to be a part of.
2) Impact: Not only "[s]mall donors can sometimes beat large donors in terms of cost-effectiveness," for example by identifying the 1 in 10,000 children in a community who would have died from malaria whether or not nets were distributed and buying them the $4 treatment, but they can also show/test paths toward more cost-effective donations.
This will make dialogues with large donors very fruitful, as both parties will be bringing their very significant comparative advantages.
3) Change leverage: People who are invested in finding yet better ways of caring for others whose issues they connect with should enjoy greater community approval than those who waited for instructions and received funding to advance others' solutions.
People who could be supported in scaling up programs will be the ones who sincerely care. This is necessary for a change to happen.
4) Solutions pressure: For many relatively privileged people in LMICs, it can be common to support many others. For example, it is possible to meet even 5 begging children trying to gain attention every day and donate to some. If one is spending others' funding, they may seek to just gain the $1,000 GiveDirectly transfer for each of them, which is unrealistic given the scale of poverty.
If one is spending their own funds, they may think twice about a sustainable yet affordable program that would make a decisive impact for the children.
We think it is unlikely that new EAs in LMICs will find charities comparable to GiveWell's existing recommended charities, particularly in middle-income countries. Existing charity evaluators are probably better suited to do this work.
On the other hand, engaging in some charity evaluation efforts can be formative for some EAs to help them internalize cost effectiveness evaluation and prioritization.
The post suggests starting with values and methodologies used by prominent Western institutions and conducting evaluations of local situations only after these values are internalized.
This can lead to value imposition.
Rather, one can start with local values or value systems and develop/refine/discuss methodologies for their measurement. This can enrich the discourse on the meaning(s) of 'good.'
Some resources on values presented by local scholars and their measurements include this paper on measuring Ubuntu, this "Buddhist perspective on measuring wellbeing and happiness in sustainable development," and this page on broad values in Hinduism.
The key can be to discern which values are truly held by the people vs. merely presented by a scholar, as well as which are internalized based on one's own decisions vs. based on conformity to a previous or an external standard.
Here, I interpret small and large donors as an average-income person in a LIC and a HIC, respectively.
I am imagining a person who has only $4 to donate in a month and someone who has $4,000 speaking about effective ways of saving lives. I am not stating a LMICs vs. HICs dichotomy.
based on the presumed origin of the frameworks in the post and the resources sheet
People in different contexts in LMICs (and HICs) can be better informed on various quality values-measurements resources.
- Possible loss of the unique prospect to make the world critically thinking and cooperative (extremely high WALY): FTX uniquely uses marketing that motivates critical thinking and cooperation, while Binance (like almost any other company) uses fear, shame, a perception of deprivation, and other negative emotions to attract and keep customers. Assuming the global expansion of the metaverse, whether people enjoy cooperation and thought processes versus assume an aggressive/hateful environment to which they have to pay attention makes a decisive difference in the global quality of life.
- Con: Uncertainty about FTX marketing success: It is uncertain whether FTX would have successfully scaled up this marketing and these norms. Possibly, if a prospective trader/NFT collector sees a Binance ad that uses almost subliminal techniques to motivate the impulse to participate (e.g. subconsciously gaining the power to abuse while protecting oneself) and afterwards sees an FTX ad that offers a complex critique of the initial skepticism around well-known innovations, they may just use Binance because, without critical thinking, it is the more powerful/threatening actor.
- Con: FTX cannot oversee a decentralized ecosystem: A decentralized ecosystem does not allow for product standardization. Since hundreds of new products emerge, FTX cannot effectively oversee most marketing and product development.
- Con: Meta to an extent optimizes for attention so would likely use normal marketing. Since Meta largely optimizes for attention, it is likely that if its stakeholders acquire FTX, the marketing would become normal, similar to that of Binance.
- Con: FTX.us is unaffected. The marketing that I was referring to was used in the US. FTX.us is unaffected by the purchase. Thus, it can be argued that the sale does not affect FTX (US) marketing substantially.
- Counterfactual investment opportunities for Dustin Moskovitz and SBF: With the sale to Binance, both Dustin Moskovitz and SBF will be able to invest in other ventures. These ventures can be more profitable and/or impactful than FTX. Thus, 'Profit for Good' could be maximized.
- CZ is in it to donate to charity: According to this video (which is similar to the one with SBF), CZ "plans to donate his wealth to charity." Thus, this development can be truly seen as 'competitive cooperation' rather than 'taking the money to buy yachts.'
This would suggest that Dustin Moskovitz should not buy FTX. However, that is only a guess.
While I agree that FTX.com has more than enough experience negotiating deals objectively, I also think that this decision considers the fear that CZ is creating.
This is because as long as FTT gains value after Binance's sell-off (due to speculation), there is no need to agree to the deal. Whether FTT gains value is influenced by investor sentiment.
The deal with Binance shows that SBF does not expect FTT to appreciate after Binance's sell-off. This would be the case when fear is associated with FTT, which is what CZ is creating.
Based on this line of reasoning, it is not necessary to agree to the deal with Binance, if one can mitigate the fear being caused by CZ.
Market price manipulation is illegal, so, technically, CZ cannot do anything besides influencing investor sentiments. One can argue that mitigating CZ's ability to threaten can be the key here, because that is the only effective strategy to keep FTT value high.
One way to mitigate one's ability to threaten is disclosing their techniques, such as deliberate motivation of negative emotions by appeal to biases, possibly using Twitter bots, etc.
On one hand, ignoring Binance's offer has presumably already been thoroughly considered by FTX.com. On the other hand, introducing an external motivation to find a solution, by 'making CZ sincerely contribute' or ignoring him, could improve the sentiment around FTT's value and thus resolve the problem.
Is it a good idea to communicate to Sam that CZ is emotionally manipulating him and that he could be making a suboptimal decision by selling for low cost?
"Losing control" implies something bad has happened in addition to the loss of value of FTX. I'm not sure what else that is.
(also commenting on the sale to Binance rather than deliberation with several potential buyers mentioned by Lukas_Gloor)
I happened to be learning full-time about FTX and its broader ecosystem for the past month or two. (ah, hah, I thought maybe next week I can apply)
CZ is a great diplomat. It can be argued that Binance runs on fear, abuse, and limiting the motivation to leave. (This is juxtaposed with the FTX model, which is powered by consideration and support.)
In his announcement to sell FTT, CZ (or the team tweeting as CZ) used emotionally challenging language and alluded to social biases. This could have motivated SBF to act impulsively, as if to avoid the prospect of prolonged 'emotional terror': the perception of wrongdoing, uncertainty, powerlessness, etc.
We will try to do so in a way that minimizes market impact. Due to market conditions and limited liquidity, we expect this will take a few months to complete. 2/4 [Tweet by CZ]
In context, one can imagine CZ enjoying liquidating FTT bit by bit, for an unknown extended period of time (which may not end), which can seem dreadful to customers and SBF, considering the somewhat 'sadist' reputation of CZ. People would just seek to avoid the pain that CZ implicitly threatens.
... Our industry is in it’s nascency and every time a project publicly fails it hurts every user and every platform. 3/4
This can be read as further appealing to Sam to prevent the 'hurting' of vulnerable users (and platforms) (and sell impulsively).
I was out with friends yesterday when the topic of whale alerts came up. Following our principles, I decided to be transparent. So I wrote a thread in 5 mins, and posted it. Little did I know it was going to be “the straw that broke the camel’s back.” 1/4 (Tweet by CZ)
This portrays effortlessness that may be disempowering to SBF, who is admired for his fast-paced decisionmaking. 'Was out with friends' can seek to inspire loneliness, 'whale alerts' can be considered fatphobic, and the 'straw that broke the camel's back' can further allude to physical disempowerment and an implied physical threat. Thus, SBF can be motivated to feel powerless compared to CZ.
The counterargument to the hypothesis that SBF acted impulsively due to CZ's threatening is that actually, the assets on FTX and Alameda had little value beyond that assigned to them by buyers. SBF can be thus collecting maximum value possible, greater than that which he would gain if further actors studied FTX/Alameda assets.
I am not sure about the valuation of FTX/Alameda. However, Binance is a very similar business. Thus, it can be that studying Binance can have similarly detrimental effects. I am uncertain about this, but it prima facie can seem that assessing the 'actual' value of Binance and estimating that of FTX based on that can provide decisive negotiation leverage to SBF.
One person who seems to be resistant to CZ's threats is Anatoly Yakovenko (for example, read how Binance CEO CZ mused on this very subject on Twitter). Anatoly could be helpful in negotiating with CZ, creating leverage by seeing through (and shaming) aggression and threats.
Greater variety of EA Newsletter emojis
This past EA newsletter used only two emojis, a down arrow (⬇️) and an anchor (⚓), while it talked about the AI Worldview Prize (🤖🧠🏆), asteroids (☄️), prize-winning criticisms of effective altruism (🏅❌:ea-bulb:), articles (📃), news (📰), and announcements (📢), among others.
This communicated the following message:
'You have to scroll down, where you have to pay attention (where the anchor is).'
'There is a lot of interesting content in this newsletter, someone paid attention to make it visually concise and fun for you. But, don't rely on (visual) oversimplifications, see for yourself.'
The former is more conducive to limited critical thinking, while the latter can stimulate it.
Further, the arrow-anchor setup can be understood as normalizing abuse (as the only viable option): an arrow can symbolize a direction given without request or agreement, and an anchor can symbolize a threat of the use of force and limited consideration, since it is a heavy, sharp object unrelated to the topics. The normalization of abuse could worsen epistemics within EA and limit the community's skills in cooperating on positive impact.
In general, viewers can pay the most attention to the portrayal of threats, even if that is not apparent or they are not consciously aware of it. Under threat/stress, viewers may be more likely to click on content, seeking to resolve the negative feeling that compels them to action.
Another reason why viewers may be paying attention to content that can be interpreted as abusive but where that is not prima facie apparent is that they seek assurance in the positive intent of/ability to trust the resource (or advertisement). For example, if one feels that an ad is threatening abuse but the text is positive, they can be more likely to read it, to confirm positive intent/seek trust.
These attention captivation techniques motivate impulsive/intuitive decisionmaking (based on chemical/hormonal processes?) and limit reasoning and deep thinking. These techniques can also motivate impulsive sharing of content, because it evolutionarily makes sense to share threats first and because people seek to affirm positive intent when they share the resource with others who will likely not describe the possible abuse.
According to this theory, using setups that can be interpreted as threatening, but where this is not at first apparent, is the most effective way of growing the EA community.
However, it can be also that the newsletter audience more likely engages with and shares content that is conducive to reasoning and deep thinking.
For instance, the High Impact Professionals newsletter uses descriptive emojis and the organization is popular in EA.
While conducting an RCT on the variety of emojis and readership/click-through rate/thoughtfulness of a response requested by the newsletter can be a bit too much, it is one way to test the hypothesis.
Let me also illustrate what I mean using the example of the image in this post. The image can cause distress, but that is not at first apparent.
The image has feminine symbolism, the flowers and possibly the light. The viewer has not requested or agreed to view this symbolism but viewed it (these are prominent). Highlighted is also the figure's chest. These two aspects can engage the viewer, who may be compelled to pay further attention.
The leaves on the left side of the image resemble reptiles and birds hiding with the possibility of attack. That can cause cognitive dissonance, because humans consider birds and reptiles less likely (due to evolution and media) to attack than mammal predators. The leaves near the flower in the bottom left corner resemble a bird with its beak directed toward the figure (who does not pay attention to it). The viewer can be compelled to look at the leaves to assess for any threat and freeze in anticipation of/to prevent the bird's action.
Some of the figure's fingers can be considered disfigured. From the perspective of the viewer, the second finger from the left on the figure's hand near the flower is bent, and the thumb on the same hand is elongated. The other hand is the one that would 'confirm' that there is nothing weird. That hand looks relatively normal, except for the swollen second finger from the top (which can also make one think of literal or metaphorical rotting) and the thumb with the small, red, pointy end.
That thumb can be considered as a 'hidden weapon' of the feminine figure. That can make people think of betrayal by those who are traditionally trusted (females). Another form of betrayal/weapon can be the left flower, which is 'going' from the side in the general direction of the viewer, like a snake with an open mouth. The viewer may be compelled to look at it, to make sure that it does not go at them. If you zoom in on the inside of the flower (the violet, purple, yellow, and red shapes), further attention captivation can be analyzed.
A viewer of this image can become aware of their body and consider it vulnerable. That is because of the bent back of the figure but prominent/highlighted chest. The figure's right side of the chest is the 'assurance' of limited prominence, while the left side portrays significant prominence. (This could be vice versa but that perception can be limited.) This is gender neutral, although the shape can allude to male body fat, which is portrayed as something which should be covered, due to vulnerability (often used in advertisement).
By its facial expression, the figure looks like an authority that is practically impossible to convince by reason and must be obeyed. One may regret engaging with this environment but can be more compelled to 'stay,' since it seems pointless to 'argue against.'
The vertical blue stripe on the right side of the image, which coincides with the figure's sleeve, can be interpreted as AI threat. It is like the flickering of the screen. The figure embodies the 'appropriate' reaction to this, which is to do nothing and advance the norms that one cannot argue against.
There are other things that I could and could not analyze.
Of course, one can disagree and simply say that it is a normal image of a lady.
However, I suggest that one stare at the image in peace for a few minutes and observe their emotions and impulses (including motions and intended motions). If the above can be leading, a different DALL-E or prominent advertisement image can be used. One can feel negative emotions/negatively about an environment and physical sensations (such as finger twisting). That is a good reason to understand these techniques rationally but not emotionally, and to avoid focusing emotionally for long on state-of-the-art AI images (but look, e.g., at groups of fashion models, where the techniques relate mostly to gender norms, body image judgment, and racial stereotypes).
If one is quite aware of these techniques, considered using various alternatives in the newsletter, and still chooses the arrow-anchor framework, then they have the reasoning for it. However, if one is simply influenced by AI and unknowingly advances an abusive spirit, the possible impact of the newsletter should be related to its intended objectives and the alternatives considered.
It can also be argued that an arrow and an anchor are nothing like a complex advertisement, but that powerful people may like a form of traditional power while their intents are good. I watched interviews with the top 100 Forbes billionaires, and while many enjoy traditional exhibits of power and their intents are good, perhaps only four would actually enjoy abusive newsletter marketing, of which two would not understand it as anything that should be felt or as suboptimal for anyone, and one would not seek to advance the abuse further. Two seem vulnerable to being influenced by this marketing, if they happen to be subscribing, which is very unlikely for one and possible but not very likely for the other.
I have also listened to podcasts with prominent EA funders and while impactful work can be a must, abuse is not (rather, positive relationships and impact is). So, using abusive newsletter emoji marketing is unlikely to please EA funders but can motivate them to repeat this 'tone from the top.'
In conclusion, the EA newsletter emojis can be reviewed.
Thank you. This actually makes a lot of sense. The farming improvements (although could be different in different areas and studies) are astounding. For example, One Acre Fund increases farmers' annual income by about $100 or 50%, for the cost of about $25/farmer in 2021. Bednets have an equivalent nominal impact for about a fifth ($5) of the price.
Sidenote: the lower % improvement suggests that AMF serves relatively affluent farmers, with average annual incomes of $633 ($76/12% × 100%), which can have two to five times the real value (unless the $76 is already a real value).
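The sidenote's back-of-envelope arithmetic can be sketched as follows. The $76 gain and the 12% improvement are the comment's own illustrative figures, and the 2-5x purchasing-power multiplier is the rough range mentioned above, not a verified statistic:

```python
# Back-calculating the baseline income implied by the sidenote above.
# All inputs are the comment's illustrative figures, not verified statistics.
net_income_gain = 76     # assumed bednet-related income gain, $/year (nominal)
pct_improvement = 0.12   # that gain expressed as a share of baseline income

baseline_nominal = net_income_gain / pct_improvement  # ~ $633/year
# Rough PPP adjustment: nominal incomes in LMICs can be worth 2-5x more in real terms
baseline_real_low = baseline_nominal * 2
baseline_real_high = baseline_nominal * 5

print(round(baseline_nominal))                              # 633
print(round(baseline_real_low), round(baseline_real_high))  # 1267 3167
```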
The agricultural productivity can increase because people are less sick and more productive. Also people could have a greater capacity to seek better farming practice information, livestock could be less ill (if bednets are used to cover livestock), and fishers could have better equipment.
Also, children could be able to help with chores rather than occupy parents or older siblings to care for them. Reduced treatment spending can also be substantial. Assuming that malaria treatment costs $4 and a bednet prevents 2 cases of malaria per year, a family with 5 children (who would be treated if they got malaria) can save $40/year, which can be a substantial proportion of their income.
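Under the assumptions just stated (a $4 treatment, 2 prevented cases per child's net per year, 5 children per family), the savings arithmetic is simply:

```python
# Annual treatment-cost savings for one family, using the comment's assumptions.
treatment_cost = 4             # $ per treated malaria episode
cases_prevented_per_child = 2  # assumed episodes prevented per child's net per year
children = 5                   # children in the family who would otherwise be treated

annual_savings = treatment_cost * cases_prevented_per_child * children
print(annual_savings)  # 40
```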
In terms of attendance, bednets can have limited effects (about an additional week of school per year?).
In Kenya, primary school students were considered to miss 11% of the school year (20 school days missed per child per year) due to malaria, while in Nigeria the figure varied between 2% and 6% of the school year (3 to 12 days per year per student). Kimbi et al. (2005) estimated that in the Muea area in Cameroon, 53 out of 144 (36.8%) malaria-infected children lose 0.5 to 14 days of school (averaging 1.53 schooldays). (Thuilliez, 2009)
That is about 10 days/year. If a bednet prevents half of the cases, that is 5 days or a week.
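As a sanity check on the "about 10 days/year" rounding, one can average the quoted country figures, using the midpoint of Nigeria's 3-12 day range; a proper average would need weighting by population and study quality:

```python
# Rough, unweighted averaging of the school-absence figures quoted above.
kenya_days = 20              # days missed per child per year
nigeria_days = (3 + 12) / 2  # midpoint of the 3-12 day range = 7.5

rough_average = (kenya_days + nigeria_days) / 2  # 13.75; the comment rounds to ~10
days_gained = 10 * 0.5       # if a net prevents half the cases: ~a school week

print(rough_average, days_gained)  # 13.75 5.0
```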
The impacts on enrollment can be relatively larger due to the increased farming income and reduced treatment cost if education expenses are substantial. For example, if education costs $100/year, then an additional child can be educated. If education expenses are close to zero, then malaria does not affect enrollment.
The quality of education or its relevance to employment is not directly addressed but can be addressed indirectly by enrolling a child in a better (higher paid) school.
Reducing mortality can have positive impact on savings and investments due to the reduction of funeral costs, which can constitute a large proportion of a family's annual income.
I am not familiar with the research on long-term health improvements. I imagine that early treatment of cases that would be more severe, especially for young children, is a key factor. Prevention reduces the rate when this would be needed.
Ah hah hah, yes, it is "net-positive life" but perhaps not life quality. Let me show you some of these videos:
People in a slum, possible abuse and neglect in spousal relationships, FGM, FGM and family, some parents decide that their child cannot live, and sending family members for life-long shrine work.
These are just arbitrary examples that show abuse, neglect, and addiction, mostly from countries that AMF does not operate in. It is possible that similar situations exist in some areas of countries of AMF operation.
The argument is that in these situations, people can feel worse than if they were dead.
On a positive note, there are also very chilled environments where lovers get married as well as officials who support consideration based on reasoning.
Although currently you do not consider life quality factors, you could use these factors to put pressure on governments to advance legislation and governance that prevents dissatisfied lives, such as by banning FGM, forced marriage, or ritual servitude.
Even if additional measures are needed to improve life quality, considering these factors can be a statement that AMF, a large player, communicates. Implementing a somewhat sophisticated metric (such as a weighted average with some exponents) can engage officials in calculating what legislation and agreements would net them the most nets (haha), rather than using blame or other negative motivation to achieve the same result.
Preferring life satisfaction (or its proxies) statistics and expert estimates can have positive effects on governance/institutional decisionmaking of AMF partner countries and regions, such as the development of government networks of people familiar with the concepts (and interested in the improvements) of life quality measures and the government's interest in quantifiable impact.
Not to bother you anymore, but if a government decides to give its 1 million nets to its worst slum and leaves the people who seem to have all they need (except maybe bednets) uncovered, that's actually equally great as vice versa, and better if malaria rates in the slum are 10% higher than those in the countryside, because more children will be able to survive and people will have more for daily spending. Right.
a) While in formal writing there are specific formats for citing others' citations, in this context I decided to link the report directly, alongside this comment thread, which reads:
I added the 4.5 value from the 2019 World Happiness Report also cited by HLI.
In this comment, the HLI's Estimating moral weights page (with the footnote) to which I referred several times in this thread is not referenced, because I assumed that those who read this thread carefully are already familiar with the page and those who are quickly skimming do not need to be distracted by that link.
I am keeping in mind that this is the Change Our Mind contest. Citing HLI could be read as an intent to convince GiveWell to implement HLI's framework, which they are familiar with, by repetition. WHR allows readers to form and update their opinions based on data which does not intend to change GiveWell's mind. Thus, WHR can change the mind of an evidence-based decisionmaker better.
Further, historically, GiveWell has used top statistical evidence to make its recommendations. The WHR enjoys a level of comprehensiveness similar to RCT-based research, while HLI's research is more speculative. Thus, the WHR can allow GiveWell to change its mind more consistently with its fundamental values than HLI's research can.
b) I have not checked the Report, but rather deferred to HLI's standards of citing statistics. I reviewed some papers cited by HLI and did not find inconsistency (other than the vague sample size interpretation as further above in this thread). This can be understood as a form of a spot check.
Nevertheless, I searched for the statistic in the 2019 WHR. (I used the search function for "4.5" and "Kenya".) "Kenya (4.509)" is cited as the value on p. 29 of the WHR pdf (pp. 26–27 of the document). I added the page reference.
This actually leads me to the methodology of the WHR. It seems like 'happiness' is a function of (pp. 26–27):
- GDP per capita
- Social support
- Healthy life expectancy
- Freedom to make life choices
- Perceptions of corruption
Although this can cover many aspects of happiness, other factors which could influence this metric (including by changing its sign), such as the normality of abuse or parental acceptance/rejection, do not seem to be included. WHR 'happiness' can thus measure governance quality and public cooperation rather than seek to understand intended beneficiaries' quality of life. However, further research is needed.
I also added a note on the interpretation of this metric.
This will all else equal favor consumption and growth interventions over lifesaving measures (though of course there are many other considerations in place).
Yup, assuming causality.
[D]oubling consumption corresponds to a 0.42 increase in the life satisfaction score ... Our ‘wealthy’ households had an average life satisfaction score of 4.3, while the ‘poor’ households had an average life satisfaction of 2.8. (p. 42) ... Stevenson and Wolfers (2013) finds a lower coefficient of 0.25 among lower income countries (p. 41)
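To make the quoted coefficients concrete, one can ask how large a consumption increase the wealthy-poor gap would imply if each doubling of consumption adds 0.42 points. This is a naive extrapolation; the log-linear relationship need not hold across such a wide range:

```python
# Implied consumption multiple behind the wealthy-poor life satisfaction gap,
# naively extrapolating the coefficients quoted above.
gap = 4.3 - 2.8      # 1.5 points between 'wealthy' and 'poor' households
per_doubling = 0.42  # life satisfaction points per doubling of consumption

doublings = gap / per_doubling  # ~3.57 doublings
multiple = 2 ** doublings       # ~11.9x consumption
print(round(doublings, 2), round(multiple, 1))  # 3.57 11.9

# With Stevenson and Wolfers' lower 0.25 coefficient, the implied
# multiple is far larger: 2 ** (1.5 / 0.25) = 64x
print(2 ** (1.5 / 0.25))  # 64.0
```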
I would be careful about simply increasing consumption and growth. More marketing (including that which highlights negative/abusive cultural aspects) could enter areas where identities are otherwise based in emotional navigation of relationships, which can be understood as deeply satisfying (these identities would be lost with increased societal attention paid to current globally competitive marketing).
Perhaps, this would start from an income level that would not be reached even with income doubled a few times, but, considering very affordable products, the Belt and Road Initiative, and growing marketing analysis and capacity in rapidly growing countries in Asia, growth without co-interventions can lead to an increased consumption of 'aggressively' marketed products, which may not increase one's life satisfaction.
See this paper on cultural combination ('syncretism') from the University of Pretoria in South Africa. There is little in it on the possibility of 'disturbing' pictures or arguably sexist, bias-based, objectifying/physically judging advertisements becoming popular among some people. It is unlikely that the people affected by the marketing (even non-customers) would be interacting with humans of different cultures (rather, they would see ads which do not respond to human emotional expressions).
People could be reporting an 'objective' life satisfaction, based on status portrayed in the ads, without emotional introspection. It is possible that they would not report dissatisfaction, because that would mean decreased competitiveness, which, based on some advertisements, could be associated with one's vulnerability or undesirable situation/identity. This is just a hypothesis.
Also, the lives of the poorer persons can be worse because of the norms that they grow up in (for example, threatening of neighbor's life for $3, sending children to work or beg from a very young age, defaulting on a group loan, ... vs. going to different neighbors for humble meals weekly, trying to put children through school, vetting microfinance firms and contemplating the EV of an income-generating asset lease).
The argument is that if you increase the income of (for instance) children who grew up begging, it does little for them because of their upbringing (it may be difficult for them to form enjoyable relationships because they are used to a lot of unwelcomingness). A better approach would be education in locally relevant skills so that they can be (considering the situation) welcome from a young age.
An alternative thinking is that people who had limited opportunities when they were young would be super grateful for the improved opportunities and will educate their children so that they do not experience low life quality, rather than approaching them as people would approach a begging child (an illustrative example of gratitude for a situation improvement, actually a life saved, from an island I've seen). This suggests that the present adult generation should be targeted with consumption-increase programs rather than children educated. Saving lives, at least by caring individuals sincerely interested in the saved people, can actually also be valued.
Still, at least some budget should probably be allocated to the "other considerations," just to make sure that, for example, men who beat their wives and women who would perpetuate the normalization of beating are not just going to get more colorful washing baskets, with 'women overpowering men by using the product' marketing for the women. I argued similarly here.
The 4.5 is footnote 30 in the HLI summary.
Detail possible inaccuracy:
IDinsight asked an SWB question in their beneficiary preferences survey; those surveyed in Kenya had an average life satisfaction score of 2.3/10 (n = 1,808, SD = 2.32 ).
While the total study sample size was 1,808 (which is also what the SD refers to), in Kenya 954 respondents were surveyed.
Based on this kind of observation, it seems to me that most people want to live. My personal, subjective, moral view is that it would be wrong to assign a different moral weight to their lives.
Let me challenge you here. Suppose that in a community inspired by Tsangano, Malawi, where people used 71% of nets which they freely received, the quality of life is -0.2 with an SD of 0.3 (normally distributed). 60 km away, in a place visually similar to Namisu, Malawi (where people used 95% of nets), the quality of life is 0.3 with an SD of 0.2. Each community has 2,000 people (who need about 1,000 nets). You have only 500 nets.
Who are you going to give the nets to?
Further challenge: You also have a pre-recorded radio show that improves farmers' agricultural productivity by coaching them to place only 1 grain 75 cm apart and cover with a few cm of soil rather than scattering the grain. This can increase people's productivity by an average of 20%. The airtime for the show in one community costs as much as 500 nets.
Are you going to forgo any nets and buy the show?
Are you subjectively assigning equivalent moral weights to the lives of the people in the two hypothetical communities?
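To make the thought experiment concrete, here is a minimal sketch comparing the two weighting choices. All numbers are the hypothetical ones above, plus a made-up constant for lives saved per used net; nothing here is an empirical claim:

```python
# Illustrative only: hypothetical numbers from the thought experiment above.
usage = {"A": 0.71, "B": 0.95}      # net usage: Tsangano-like vs Namisu-like
mean_qol = {"A": -0.2, "B": 0.3}    # mean quality of life (arbitrary scale)
nets = 500

def lives_protected(community, nets, lives_saved_per_used_net=0.01):
    # lives_saved_per_used_net is an assumed, invented constant
    return nets * usage[community] * lives_saved_per_used_net

# (a) equal moral weight per life
equal = {c: lives_protected(c, nets) for c in usage}
# (b) weight each life by the community's mean quality of life
weighted = {c: lives_protected(c, nets) * mean_qol[c] for c in usage}

print(equal)     # B protects more lives because net usage is higher
print(weighted)  # QoL-weighting makes saving lives in A negative-value
```

The sketch shows why the weighting choice matters: equal weights favor B only mildly (via usage rates), while QoL-weighting implies that saving lives in community A has negative value, which many would find an unacceptable conclusion.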
I suspect that the key determinant of quality of life after attempting suicide is mental illness, especially depression, not the suicide attempt itself. But I'm uncertain about this, and even more uncertain given that both the literature and my clinical training are based on a high-income-country context; things could be very different in low- and middle-income countries or in absolute poverty.
Thank you. I think so. In high-income contexts, depression can relate to loneliness, to social media that market through negative emotions, and to abusive, neglectful, or rejecting family relationships (which the media, and people influenced by them, can draw on and lead one to assume as reality).
In many low-income contexts, it can be argued that people are not as lonely, because agreements are based on community accountability (which requires mutually enjoyable or broadly approved emotional navigation) rather than a sound rule of law, and business relationships are founded on friendship (a way of gaining customers for undifferentiated goods). Family can also play a key role in low-income countries: forced marriage, norms of female and child abuse, FGM, and limited family planning can all worsen one's mental health.
The key difference may be that in high-income countries the negative perception of one's relationship situation and limited enjoyment of others is motivated by media, while in low-income countries it is perceived due to actual and 'necessary' abuse (e.g. someone has to be beaten into making bidis because productivity would not increase otherwise).
A related thought: if (low-paid and unpaid) productive people in low-income contexts die by suicide, productivity decreases, ceteris paribus.
An EA who studies India's media commented that showing suicide on TV is banned there, because it increases suicide rates.
My small-sample study shows that some people can perceive their quality of life as below that of death, wish to live 0 additional years, and still live. I did not research suicide, but the local enumerators, an elder, and an educator did not comment on it.
It can be hypothesized that the willingness to attempt suicide is part of a 'dialogue' between the 'abused' and the 'abuser,' used as a means to argue for more favorable treatment. It can be a statement that it is unacceptable to, for example, beat people for no perceived reason. Related concepts are described in The Wretched of the Earth by the psychiatrist Frantz Fanon.
The ability to attempt suicide can increase people's willingness to 'lead this dialogue,' which would otherwise be unthinkable, and thus (at least 'during the discussion') lower their quality of life. It can be assumed that this has limited benefits, since external education and investment, rather than internal redelegation of tasks, is needed to highlight enjoyable cultural approaches and enable productivity without (human) abuse.
This would suggest that limiting the use of highly hazardous pesticides can improve people's mental health (there is no need to feel emotions intended to lead to an improvement of their situation when they can do very little about it themselves). However, it can also be argued that once people know about suicide but are prevented from it, their mental health decreases even more significantly, because they perceive the 'trap' of having to live in an abusive situation without the ability to change it for themselves or future generations.
I am actually not describing depression as you may be understanding it, a "persistent feeling of sadness and loss of interest," which can occur when people feel uncompetitive or unable to become competitive, not needed or without unique skills (not considered as individuals), or not bought in on the meaningfulness of hobbies and without developed interests (I am not medically trained and am only suggesting ideas rather than describing a medical condition).
I am describing 'depression' that is based on one's knowledge of being abused because of one's identity and being unable to do anything about it; on having urgent (family) issues that no close ones help with and that one cannot resolve (for example, my research suggests that people would give up, on average, 78% of their remaining life if 'people around them cared about each other's problems,' though in context people would also give up large fractions of their life for nutritious food, insurance, etc.); on the culturally limited presence of, and training in, love; and on limited prospects for the improvement of one's family situation.
Perhaps the anecdotes on the CPSP website can be understood as 'weird' by people around the 'storytellers.' Most people understand the situation and just go along with it. Suicide causes issues for the family.
Thus, the "assumption that people who attempted suicide would lead negative lives" should hold if one looks at the situation from the perspective of someone in it who assumes that their emotions can lead to change or to authority/peer understanding, or from the perspective of someone not 'at peace' with the situation. It would not hold if people are at peace with their roles and situations and depression is defined as a limited need to emotionally negotiate relationships.
I emphasize that I have just written down some ideas, which may not be indicative of anyone's perceptions, based on my limited understanding of the intended beneficiaries and non-beneficiaries as well as of some resources. Persons and their attitudes are individual. When I hypothesize a commonality, it may not hold true, may apply only to some, may be taken out of context, or may have other interpretations.
- I looked at the graph on page 42 (the bar for Kenya is 2.3), but actually took the statistic from HLI, which cites it. The 2.8 (p. 40) is the survey average. Good catch regarding HLI (and myself, unless I am further misreading), whose phrasing can make it seem as if the Kenyan sample was 1,808 with SD = 2.32.
- OK, I was actually unaware of that (I was clearly skimming the HLI page for bias confirmation, or rather made a note of an alarming statistic when skimming). I added the 4.5 value from the 2019 World Happiness Report, also cited by HLI. This averages closer to 3.9/10, which is the LS estimate for GiveDirectly beneficiaries.
TLDR: Sure, the 30% seems quite high, although if the price of alternative fertilizer is around double, it could be accurate for many subsistence farmers.
I have the 30% from this cited text and the BOTEC. In the sheet, the 30% is subtracted within the overall cost-effectiveness calculation that considers qualitative adjustments (E77 in "Calculations"). "Calculations" E58 specifies a 70% adjustment, i.e. -30% due to the risk of agricultural harm ("Assumptions" E36). This 70% multiplies the other qualitative adjustments (E60), which multiply the cost-effectiveness before qualitative adjustments (E76) to get the cost-effectiveness after adjustments (E77).
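For clarity, the multiplication chain described above can be sketched as follows; all numbers other than the 70% are invented placeholders, not the actual spreadsheet values:

```python
# Hypothetical values; only the structure mirrors the BOTEC described above.
ce_before_adjustments = 10.0     # "Calculations" E76 (arbitrary units)
agri_harm_adjustment = 1 - 0.30  # E58: 70%, i.e. -30% risk of agricultural harm
other_adjustments = 0.9          # placeholder for the remaining qualitative factors

overall_adjustment = agri_harm_adjustment * other_adjustments  # feeds E60
ce_after = ce_before_adjustments * overall_adjustment          # E77
print(round(ce_after, 2))  # 6.3
```

The point is that the -30% enters multiplicatively alongside the other qualitative adjustments, so it scales the final cost-effectiveness figure directly.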
The number does seem high, though, especially considering that substitutes seem available. However, it may also be accurate, if farmers can afford less fertilizer due to its higher price. An RCT-based analysis by One Acre Fund (OAF) cites about a 50% improvement in yield (in a different region) when farmers are given a loan to purchase (and are trained to use) fertilizer and an improved seed variety (the fertilizer:seed cost ratio is about 2:1). Based on anecdotes from The Last Hunger Season, some farmers cannot afford fertilizer.
The price difference between the highly hazardous pesticides and alternatives is not stated, although pesticides constitute only 7.5% of input costs. However, the document (pp. A-12 to A-13, or 58-59 in the pdf) cited by GiveWell, which gathers statistics on farm inputs, considers relatively high costs for farm labor and land rent, which in the case of subsistence farmers can be neglected (thus the share would be much higher than 7.5%). There is also very high variance among states in India: some states seem to use much less fertilizer (e.g. 2.5% of seed costs in Mizoram) than others (39% of seed costs in Andhra Pradesh). Thus, it is unclear to what extent any increases in fertilizer price affect yield.
Further, GiveWell cites that
[p]esticides commonly used for suicide may be more convenient, or have a different mechanism of application, in which case agricultural workers will incur some costs in learning how to use replacements.
Farmers in "The Last Hunger Season" were not trained in fertilizer use prior to the OAF program. It may be that farmers who pay attention to using fertilizer correctly will do so even if another type is offered, and vice versa. India's growing network of rural e-centers with agricultural information can provide appropriate fertilizer information. In other countries where CPSP operates, farmers may be less informed. Thus, any decrease in agricultural productivity due to unfamiliar fertilizer use can be limited.
A professor conducted research on whether sentiments about counterfeiting are substantiated. It is possible that when a new type is introduced, farmers will be suspicious. This can be temporary or have limited effect (trust attaches to the local retailer, not the brand).
(More costly) fertilizer can also substitute for other items that increase quality of life, such as food, education, or health. Thus, even if a higher cost does not lower yields, the -30% (or another) adjustment could still be valid due to the effects on counterfactual spending.
I understand that GiveWell assumes a 0.3 agricultural productivity decrease as a high estimate and 0 or 0.01 as a low estimate. The high estimate is used, while numbers with a 0 decrease are cited next to the adjusted ones, possibly due to high uncertainty about the complex effects on agriculture.
So far, I have only considered the effects on smallholders. Effects on industrial farms may be much more substantial, even if the price difference is on the order of percent. I assume that in India, most farms are subsistence; that should be 85% (by land holdings?) in Uttar Pradesh. I further assume that industrial productivity is about 5-10x that of a subsistence farm (about 1/2-1/3 of the land can be used in subsistence compared to commercial farming, and productivity can be about 2-3x lower). This would suggest that commercial farms produce about as much (a Fermi estimate) as subsistence farms (15%*5 = 75% ≈ 85%, or 15%*10 = 150% ≈ 1.8*85%).
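The Fermi estimate can be spelled out explicitly, using the assumed 85%/15% split and the assumed 5-10x productivity ratios from above:

```python
# Assumed inputs from the Fermi estimate above (not measured data).
subsistence_share = 0.85  # share of land holdings (Uttar Pradesh figure)
commercial_share = 0.15

for productivity_ratio in (5, 10):  # commercial output per holding vs subsistence
    commercial_output = commercial_share * productivity_ratio
    ratio = commercial_output / subsistence_share
    print(productivity_ratio, round(ratio, 2))
# 5x  -> commercial output ≈ 0.88x subsistence output (about as much)
# 10x -> commercial output ≈ 1.76x subsistence output
```

So, under these assumptions, commercial farms produce somewhere between roughly the same and roughly 1.8x the subsistence total, matching the estimate in the text.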
In areas where subsistence farmers use little chemical fertilizer, the productivity decrease can be negligible (and much lower than that in commercial agriculture). Conversely, in regions where smallholders spend significant proportions on fertilizer, they can be affected disproportionately more than industrial farms. The former suggests that the median would be close to 0 and the mean would be the average of the commercial effect and 0 (e.g. 2% if commercial output falls by 4%). The latter can suggest a median of >30% and a mean of about half that.
The median would be 30% and the mean around 0 if a few farms constitute a large majority of output and are relatively unaffected, while the majority of smallholders are affected significantly. This is what makes intuitive sense, on the assumption that industrial agriculture largely outperforms subsistence farming in output and can flexibly (at negligible per-unit cost) switch to alternatives (or is already using them). However, this can be a biased perspective based on knowledge of US and other developed economies' agriculture. While rapidly industrializing India is the largest nation among CPSP partners, other beneficiary countries can be less industrialized.
Secondary effects from forgone commercial agriculture taxation (as well as any decreases in the international competitiveness of beneficiary nations), which can support large proportions of subsistence farmers, could also be discussed.
Lower fertilizer use could also lead to higher rents accrued to farmers, if their product is sold as organic at a premium.
Another consideration is that CPSP, on its previous website, cited an investigation into possible negative effects on agricultural productivity in Sri Lanka (listing this on the website can suggest a significant concern). This can be considered in conjunction with GiveWell's cited enthusiasm for, and the great fit of, the professor who leads the project and applied for the grant (he could be motivated to gather and interpret evidence in a way that highlights benefits and downplays risks).
The effects of highly hazardous pesticides on agricultural productivity (and the impact on populations) will depend on the
- Price and effectiveness differences between the currently and newly used fertilizer for smallholders and commercial farmers
- Willingness and ability of subsistence farmers to learn any new fertilizer use techniques
- Ratio of subsistence and commercial farms
- Use of commercial agriculture taxation on smallholder productivity
- Price premium for no chemical fertilizer use
- Agricultural productivity units (tons, monetary value, % of farmers not experiencing hunger, ...)
Guessing these values, measuring productivity in real local currency units, and considering effects only on smallholders, based on the above discussion, the decrease could have a mean of 0.04 with SD = 0.02 and be normally distributed, with possibly other distributions depending on country or region.
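A quick simulation of this guessed distribution (mean 0.04, SD 0.02) makes its implications visible; the numbers are purely illustrative guesses from the paragraph above:

```python
import random

random.seed(0)
mean, sd = 0.04, 0.02  # guessed productivity decrease: 4% ± 2 pp

samples = [random.gauss(mean, sd) for _ in range(100_000)]
est_mean = sum(samples) / len(samples)
# Negative draws would mean a productivity *increase*; a truncated or
# skewed distribution might be more realistic for some regions.
share_negative = sum(s < 0 for s in samples) / len(samples)
print(round(est_mean, 3), round(share_negative, 3))
```

Under a normal distribution with these parameters, about 2% of the mass falls below zero, i.e. the guess implicitly allows a small chance of a productivity gain.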
One Acre Fund provides $75-80 loans for fertilizer and seeds. 10 kg of improved corn seeds costs 70,000 UGX, and 10-15 kg is needed for an acre (I used 100,000 UGX, or about $25). Based on the book and confirmed by Global Partnerships, the average farm size is about one acre. $25/$75 = 1/3, so the seed:fertilizer ratio is about 1:2.
OK, for now I disagree but the time when I agree can come within a few years.
I think this applies in settings where people know how to spend money to maximize utility and enjoy (money independent) good relationships and people-centered systems.
Let me argue that the normative environment matters more than money. People in absolute poverty can be doing great if they are safe, know that they will receive treatment if they need it, many friends around them are quite cool, families are loving, and they always have something to learn which makes them better in some way.
Monetarily, this can be achieved with health insurance and maybe textbooks or various informational radio shows. Otherwise, it is the norms. Individuals cannot spend on normative development, because others need to progress with them.
For example, I stayed in a $60/month place and it was cool because of my engineering-student housemates' norms (they helped me install my bednet, we had great convos on race and gender, and there was mutual respect for personal space but enjoyment in greeting each other), a guard and good padlocks (we had thieves outside the doors for a few hours one night, but they did nothing because they did not have equipment to cut locked iron doors), malaria testing available about 50 m from the door with medicine for $4, and a great work environment with caring colleagues.
I also stayed (just for a month) at a $200/month place, where the landlady complained in front of her two young daughters that contraception had not been popular, so she had regrets, and also gave me incorrect information to make me sign the (vague) lease. Apparently, her cleaning lady also stole her valuables when she was away. I also saw four?-year-olds playing with imitation money; the boys took the money from the girl, or denied it to her, when she was excited to play.
My argument is that a $3/day (rent + food) secure place with cool, nice people who normatively enjoy cooperative progress is better than a $10/day place which is less secure, where it can be argued that families are not as loving, relationships are not as respectful, and mutual support and inspiration to learn are limited.
This is to illustrate how it can be argued that giving individuals money can have limited impact if the normative environment is not up to speed. (Maybe they can try to make people sign vague lease agreements of higher value, thieves can get better equipment, men can make it a reality that women do not get money, and children can continue to be rejected while receiving more expensive toys.)
We do not know how different normative environments are. The two places I described were within walking distance of each other. If you just give people money, you don't know what is going to scale up.
I am not saying that Alex is anything like a traitor or supports YIMBY for nefarious reasons. I am saying that there can be better candidates for his job. For example, I identified the Aravind Eye Care hospitals, a profitable investment, which treat blindness at large scale and for free for 70% of patients. Or, training surgeons to do a hernia repair with a bednet ($12.88/DALY averted) could be quite a suitable, cool personal tip. A fistula surgeon in Uganda recommended a transportation stipend fund for children at risk of disability (otherwise families do not spend the $15 and the children then have issues). That can be a somewhat touching (and also highly cost-effective) recommendation. These three opportunities should also increase, or prevent decreases in, wellbeing and improve productivity, in addition to improving health. And even a stereotypical Californian could be excited about them.
This idea is not associated with one person; rather, I somewhat arbitrarily used the example of Mr. Berger to criticize a broader issue one could see: it is not that the most cost-effective interventions are identified by extensive critical dialogue with multiple affected and unaffected stakeholders; instead, bombastic narratives are used to trick people into keeping loyalty to GiveWell without critically thinking about how they can actually benefit the world the most. (I am not arguing that if people give to literally shooting the moon, because, for example, the US military makes them excited about it, that is better than supporting GiveWell in conjunction with some projects which can be interpreted as attention-captivating or attention-keeping. I am saying that if we can presume that funders actually want to hear and think with others about smart tips in Global Health and Wellbeing, and do not need to be entertained by something resembling a stereotypical popular Californian TV channel, then we should have that attitude. Sometimes the interests of a group may seem similar to what is portrayed on TV. But TV does not resemble reality.)
It is disrespectful and uncaring to confirm people's biases and not do any thinking for them, especially if it is your job.
I am not criticizing the justification for being short or unsupported by scientific evidence. I am pointing out that the impact cost-effectiveness analysis was not conducted properly, because impact was in the wrong units and cost was not considered (or compared to other programs that bring comparable benefits), nor was it self-evident that this is cost-effective 'like magic,' like high blood pressure screening in upper-middle-income countries. The paper that you cite may be 'tricking' decisionmakers into giving this issue importance because of fancy math and a formal tone, but it does not say anything about cost-effectiveness; I think it discusses price elasticity. That is why I was suggesting the (introductory) Virtual Program: 'you need to consider impact and support projects on the tail of cost-effectiveness' is maybe Day 2 material, after introductions.
I do not think that productivity is affected, because the policy (possibly) pushes away low-income people, who can be assumed to be no more productive, in real terms, in an affluent neighborhood than in a less affluent one (for example, a shop assistant does the same work in LA and in a smaller city but is paid more in LA). However, if people move away, labor supply decreases, the price of labor increases, and people get higher incomes. The issue can be that high-income people then need to pay more for relatively low-skilled services, which may decrease their productivity. Thus, this may enable the redistribution of income from property owners to affluent service payers.
Is it that the house owners lobbied for this policy in the first place (clearly, property owners, who may have significant policy leverage, have not advocated for YIMBY approaches)? Or did the jurisdiction decide to limit housing to make the place feel more exclusive and attract prestige-seeking innovators? Or are the rents at market rate and YIMBY is trying to reduce them (which would cause decreased productivity)?
Also, the impacts of changing this policy would probably be relatively limited; maybe prices would decrease by 7%? The problem could be resolved better by room sharing where people actually get along well, because the room is set up that way (e.g. with sound barriers) and/or they enjoy being with others. Enjoying being with others increases wellbeing and may also be associated with better health. What about dignity or fanciness for people who stay in a very small share of the room (fancy pods; the LED lights cost dollars)? That could solve the problem with much higher cost-effectiveness. People would cooperate more, and innovativeness and productivity would increase. Room sharing could even be welcomed by both affluent service buyers and property owners (who could benefit from a higher total if they manage to fill a room with people who get along well and each pays more than rent divided by the number of tenants).
Why is camping not the best economic outcome? If low-income people stay for free instead of paying already affluent property owners, then that is effectively redistribution, which creates utility according to the logarithmic model. It may be the issues associated with camping, rather than camping itself, that are not the best economic outcome. For example, people may be disturbed from studying by cars or lack lights; that could be resolved by earplugs and a solar lamp. If people take drugs because they are nudged into it on the streets, that could be resolved by relevant programs, such as nudging into commercial upskilling; cool sports, dance, or other artistic or physical self-development; relationship building based on mutual understanding, respect, care, and love; or drug cessation therapy. Even highly productive people might simply pay these people to be there, dancing in a way that leads the emotion very well, to the busy passerby's liking. So, some of the campers would remain economically unproductive, but their (extra-economic) contribution to wellbeing would be priceless.
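The redistribution point can be illustrated under a logarithmic utility model; the consumption and rent figures below are invented purely for illustration:

```python
import math

# Hypothetical daily consumption figures, not from the post.
owner, camper = 100.0, 4.0  # $/day consumption of property owner and camper
rent = 3.0                  # $/day the camper would otherwise pay the owner

def log_u(consumption):
    # logarithmic utility: marginal utility falls as consumption rises
    return math.log(consumption)

with_rent = log_u(owner + rent) + log_u(camper - rent)  # camper pays rent
without = log_u(owner) + log_u(camper)                  # camper camps for free
print(without > with_rent)  # True: total log-utility is higher without the rent
```

Because log utility is concave, $3 is worth far more to someone consuming $4/day than to someone consuming $100/day, so the free-camping allocation yields a higher utility total in this toy model.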
With regard to the example of your criticism, I think that the book is trying to make you do exactly that: come up 'yourself' with the idea that we need to think about issues now so that we can solve them. So, even though you may be indirectly criticizing the author's (or their collaborators') narrative, you are not criticizing the author's approach itself (because they are in control of how they want to contribute to the advancement of EA thinking: get people to behave predictably or encourage them to develop innovative solutions now).
Actually, this thinking about your criticism makes me wonder:
Maybe it is necessary to criticize Mr. Berger.
Thank you! I think quantitative approaches should be given greater attention.
1) Are you interested in increasing the diversity of the longtermist community? If so, along what lines?
One possibility is to increase the shares of minorities according to US Census Bureau topics: race, sex, age, education, income, etc. Ways of thinking about EA, one's (static or dynamic) comparative advantages, or the roles naturally or nurturally taken in a team would be irrelevant. The advantage of this diversification is its (type 1 thinking) acceptance/endorsement in some decisionmaking environments in EA, such as the Bay Area or London. The disadvantage is that diversity of perspectives may not necessarily be gained (for example, students of different races, sexes, and parental incomes studying at the same school may think alike).
Another possibility is to focus on the ways of thinking about EA, one's current comparative advantage and that which they can uniquely develop, and roles that they currently or prospectively enjoy. In this case, Census-type demographics would be disregarded. The disadvantage is that diversity might not be apparent (for example, affluent white people, predominantly males, who think in very different ways about the long-term future and work well together could constitute the majority of community members). The advantage is that things would get done and different perspectives considered.
These two options can be combined in narrative-actual or actual-narrative ways: Census-type diversity could be an instrument for diversity of thinking/action/roles, while only the former is narrated publicly. Or, vice versa, people of various ways of thinking, comparative advantages, and preferred roles would be attracted in order to increase Census-type fractions. Is either necessary, or a great way to mitigate reputational-loss risk? Do you have an available strategy on longtermist community growth?
2) Is it possible to apply for a grant without collaborators but with a relevant experience or strategy of finding them?
For example, can one apply if they had previously advertised and interviewed others for a similar EA-related opportunity but have not initiated an advertisement process for the application?
Do you award grants or vary their amount conditional on others' interest? For example, is it possible to apply for a range depending on a collaborator's compensation preference or experience? Is it possible to forgo a grant if no qualified candidate is interested?
This is so cool. I had a similar idea about an ethical game a while ago! The idea was that:
- The objective is to improve decisionmakers' ethics
- More points are gained for impact-maximization decisions in places and at times of large important meetings
- The game settings/new developments are unrelated to the actual meetings but inspire thinking alongside similar lines
- At places and at times without large important meetings, on the other hand, points are gained for more deontological and active-listening-based decisions - the greater diversity of places of engagement, the better
- This should motivate the consideration of a broader variety of groups, also through confirming that individuals should be nice to others
- Traditional social hierarchy shortcuts are played with in the design
- For example, any gender person or entity can save another entity from a tower/pond/etc, if that task is included in the game
- Authority characters exhibit some of the same body language as traditional and non-traditional authorities but are of any identities (traditionally more and less powerful, such as people of any gender, race, and background) who express themselves individually
- Body shaming is entirely replaced by spirit and skill-based judgment but it is still possible to in some cases confirm one's biases about body hierarchies
- Hierarchies related to territory, self and partners' objectification according to commerce, disregard in intimacy, ownership of items expensive due to marketing not function, fight that hurts someone, gaining attention by threat, showcasing unapproachability, and other negative standards are not used to motivate players' progress or present a hierarchy - there is not really a hierarchy since the game is cooperative
- These hierarchies can be used for critical engagement/discourse
- The environment and tasks are continuously created, also by the players
- Players gain points/perks for suggesting quests and settings that motivate impact-maximization decisionmaking and active listening to a diversity of individuals
- The explicit objective point/perk award criteria include an ethical 'passing' standard (relatively easy to get approved by friends, as long as one is friends with at least someone from various teams/groups/experience levels) but are otherwise based on something exclusively game-relevant (such as the number of blocks used)
- The developers check on the ethical developments and intervene as necessary
- For example, if a new ethical norm that was just accepted starts being overemphasized, as if to make a point by some groups, an interesting less ethics-intense challenge is introduced
- If the dark triad traits become prominent among malevolent actors, points are associated with actions that counter the reinforcement of these traits
- If anything becomes too repetitive or boring, new possibilities of playing are introduced
- Friendships are formed
- Players can participate in various teams at the same time. There is no better and worse affiliation, point maximization depends on one's skills. Players can change affiliations freely, which can be beneficial to their score.
- Chat function is engaging and concisely informative, providing the delight of having all info available in a useful format. Sincere reactions can be exhibited (rather than e. g. stickers or memes that confirm biases or optimize for non-critical engagement)
- Players can be recognized at large decisionmaker meetings and outside.
- Coding challenges
- Make it difficult to trick the GPS
- Or not, if there may be a sufficiently small number of sufficiently cool non-decisionmaker players who can inspire the decisionmakers
Feel free to use this for inspiration.
Are you soliciting ideas for the games in any way? For example, will you have Essay Contests or ideation days? There may be high interest from the EA community.
Another question is whether you seek to actually engage the players in the alignment, or rather to make them comfortable so that you can slip any thinking to them, even if they 'wanted spaceships and it is animal welfare?'
For example, to acquire a bounty pirates have to critically engage parrots while finding a way to make swords when iron is not on the map.
This can be very entertaining to the attendees of the OPEC and non-OPEC Ministerial Meeting, if it seemed that everyone is parroting phrases. Having no natural resources on the map can be a fun way to attract attention in a kind way and gain friendly understanding among fellow Meeting participants. This is a hypothetical example.
The way to motivate the decisionmakers to engage non-humans can be through analogous game challenges (this blob flying around you is trying to communicate something: what do you do to understand?) or by marking some places with those who understand non-humans (e.g. neuroscience researchers or sanctuary farmers) as high-point for active-listening decisionmaking.
For example, leaning on a table with one's fingers or including someone in their seat
I am not sure if I am explaining the emotional difference adequately, but it relates to the feeling of 1) from the stomach up, palms going up, the person seeks to engage and is positively stimulated, or 2) slight relaxation in the lower back, hands close, the person seeks to repeat ideas and avoid personal interaction.
Engaging the players may be necessary; otherwise, problems that need extensive engagement will not get resolved, and efficiency may be much lower than when everyone actually tries to solve the overall inclusive alignment and continues to optimize for greater wellbeing, efficiency, and other important objectives.
The example is that a 60-hen cage can be better for chickens than open barns (according to EconTalk), and that is just one aspect of the life of one of the almost 9 million species, and of many more individuals. If people were 'tricked' into opening cages, a lot would remain unresolved.
The discussion can be more unified (interpreted as organized with better-searchable ideas) if the comments are in-line and one does not need to search (the same) quotes and their responses in the comments. One would look in-line for comments relevant to the quotes that they like/seek to discuss or learn further perspectives on and under the article they would look for general comments. This is similar to how one would comment on a Google Docs draft that someone asked them to proofread.
Possibly, the most commented-on quotes could be highlighted - 'community highlighting' - by number of comments, their length, or per-part upvotes. Are there any risks of bias confirmation/perpetuation on a first-come basis?
I wonder what searchability (of annotations and linked notes) would be optimal for the Forum. Currently, it seems somewhat difficult to search articles by keyword with the Forum search function, because the recommendation algorithm may disproportionately show specific posts.
Could this apply not only to comments but also to upvotes/downvotes (as you suggest with '+1'), questions, and polls relevant to specific parts, quotes, or sections of the post?
One could find it easier to orient themselves in the community responses to different parts of the text if they could hover over a highlighted part and see its karma and reactions. The reactions could also be categorized, and users could choose to see only some types of reactions (e.g. not typo fixes, clarification questions, or polls, but yes to complementary or contradictory evidence, challenging questions, and idea advancement).
The community, rather than the author, should select the segments they wish to comment on. Otherwise, the author could 'hide' a contentious conclusion in a generally agreeable block of text. However, this has the disadvantage that one person may respond to a key word in a sentence and another to the entire sentence. Comments that could be consolidated would then be split, which would reduce text-orientation efficiency.
I have not seen this on the EA Forum feature suggestion thread, which you may be interested in mentioning it on.
It seems alarming that GiveWell bases its significant donation recommendations on only one study that, furthermore, does not seem to engage with beneficiaries' perspectives, but rather estimates metrics related to performance within hierarchies set up by the historically privileged: school attendance, hours worked, and income.
GiveWell’s reports should align more closely with academic norms where authors are expected to fully explain their data, methods, and analysis, as well as the factors that their conclusions are sensitive to
I disagree that GiveWell's reports should align more closely with academic norms, because these norms do not engage intended beneficiaries.
Explanations can help differentiate the actually most helpful programs from those made prestigious by big/small numbers and convoluted analyses.
Allowing GiveWell's audience to tweak the factors and see how conclusions change would show the organization's confidence in its (moral) judgments.
'Data' should not be confused with 'numbers.' Focus group data may be invaluable compared to quantitative estimates when a solution to a complex problem is being found.
The only evidence GiveWell uses to estimate the long-term effects of deworming comes from a study of the Primary School Deworming Project (PSDP) using the Kenya Life Panel Survey (KLPS) (Miguel & Kremer, 2004) and its follow-ups (Baird et al., 2016; Hamory et al., 2021). (HLI, Appendix: Calculations of Deworming Decay)
School curricula in developing contexts may include post-colonial legacies, select elites while leaving most behind, or optimize for raising an industrial workforce, which may prevent industrializing nations from advancing in global value chains while making those countries instruments for the affordable consumption of foreign-made goods.
I am unsure whether unpaid domestic and care work was counted within hours worked - excluding it would imply a greater value of paid over unpaid work, a standard set by the historically privileged.
Zotero creates a bibliography if you open all the links and then click the browser extension icon on each page. It does not always work perfectly, but data from academic articles usually get copied well.
OK! I cannot find #Title on LessWrong, but based on your description it seems analogous to linking a post or using a tag?
If a user is a fan of someone they do not have an actual connection with (they usually have not met in person one-on-one or shared common interests), they would use the professional tag (for example, one could tag Joel McGuire when writing something they think he would find useful, based on his posts). The friendly tag (which has to be authorized by the tagged person) should be used when people are confident that they know their friend's interests so well that they would recommend something the friend would enjoy (while perhaps also finding it useful). So, the difference in intent is: informing based on the user's professional presentation vs. notifying of enjoyable content based on the users' friendly connection.
Tagging users to notify them (@[username]). People should be able to ‘authorize’ friendly tags but ‘professional’ tags should be possible by default. Users should be able to turn on-off notifications for ‘friendly’ and ‘professional’ tags. In this way, people could make and maintain connections via the Forum.
Also, orgs (or departments) could have their own tags. For example, if someone misses a writing contest deadline, they should still be able to notify the org about an idea. Organizations could also filter for their tag plus another set of tags or keywords (for example, 'Open Philanthropy, Worldview Diversification, DALY' could allow an OPP researcher to skim collective intelligence relevant to their calculation methodology and possibly delegate further research to people who have already thought about it).
I was just about to suggest that. Reasoning explanations behind a vote could also be valuable.
Should the maximum upvote strength be associated with factors other than user karma, such as self-assessed professional expertise (according to broad criteria)? For example, someone who works at the EU Commission on the Internet of Things could assess themselves as an 'expert' on a question about valuable actions related to a new draft of the EU AI White Paper.
Voting can also seek to ameliorate biases by highlighting underrepresented perspectives. For instance, in a poll about priorities related to wild animal welfare, the vote of an AI safety researcher could be weighted more heavily if the majority of the other votes come from wild animal welfare researchers. Voters' organizational affiliations, professional and cause area expertise, and relevant demographics could be considered.
Unnecessary positive discrimination should be avoided. For instance, the votes of male and female US college graduates on an issue unrelated to gender or gender norms should be weighted the same, while the vote of Afghan women should be weighted more than that of Afghan men on any Afghanistan-related topic. This is based on the assumptions of equal opportunities for male and female students at US colleges, but historically and currently unequal decisionmaking opportunities for women and men in Afghanistan.
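The weighting idea above could be sketched roughly as follows. This is a minimal illustration only; the group labels, the majority-detection rule, and the 2x upweight factor are all hypothetical choices, not a proposed Forum implementation:

```python
from collections import Counter

def weighted_tally(votes):
    """Tally poll votes, upweighting perspectives underrepresented among
    the voters. `votes` is a list of (choice, group) pairs; the group
    labels and the 2x factor are purely illustrative assumptions."""
    group_counts = Counter(group for _, group in votes)
    majority_group, _ = group_counts.most_common(1)[0]
    tally = Counter()
    for choice, group in votes:
        # Votes from outside the majority group count double (illustrative).
        weight = 1.0 if group == majority_group else 2.0
        tally[choice] += weight
    return dict(tally)

votes = [
    ("priority A", "wild animal welfare"),
    ("priority A", "wild animal welfare"),
    ("priority B", "wild animal welfare"),
    ("priority B", "AI safety"),
]
print(weighted_tally(votes))  # {'priority A': 2.0, 'priority B': 3.0}
```

A real scheme would need care about the failure modes discussed above, e.g. not applying the upweight when the question is unrelated to the underrepresented group's distinct perspective.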
For sure! I think so, and actually I am thinking more axes could be used - for example, one scale for 'relaxation,' another for 'pain,' another for 'energy,' etc.
But is sentience only computational? As in, the ability to make decisions based on logic but not decisions based on instinct - e.g. baby turtles going to the sea without having learned to do so?
Yeah! Maybe high levels of pleasure hormones just make entities feel pleasant, while matter not known to be associated with pleasure doesn't. Although we are not certain what causes affects, some biological bodily changes should be needed, according to neuroscientists.
It is interesting to think about what happens if you have superintelligent risky and security actors. It is possible that if security work advances relatively rapidly while risk activities receive less investment, then in a situation with a very superintelligent (security) AI and 'only' a superintelligent (risky) AI, assuming otherwise equal opportunities for these two entities, risk is mitigated.
Yes, changing digital minds should be easier because they are easily accessible (code) and understood (developed with understanding, possibly with specialists responsible for parts of the code).
The meaningful difference relates to the harm vs. increased wellbeing or performance of the entity and others.
OK, then 'healthy' should be defined as normal physical and organ function, unless otherwise preferred by the patient, with normal or high mental wellbeing. The AI would then still have an incentive to reduce cancer risk, but not to, e.g., make an adjustment when inaction falls within a medically normal range.
Elitism in EA usually manifests as a strong preference for hiring and funding people from top universities, companies, and other institutions where social power, competence, and wealth tend to concentrate.
What do you mean by competence? Is it the skills, knowledge, connections, and presentation that advance these institutions? Does the advancement include EA-related innovation? Is this competence generalizable to EA-related projects?
Is social power the influence over acceptable norms due to representing that institution, or due to having an identity that motivates others to take the mental shortcut of 'deference to authority'? Could social power be gained without appealing to traditional power-related biases?
Traits that elitism tends to select against (or neutral) ... - Critical thinking
Critical thinking in solving problems related to achieving an institution's objectives is supported, while critical engagement with those objectives may be selected against. This also implies that no one thinks about the objectives, which can be boring or make people feel a lack of meaning: companies could be glad to entertain conversations about the various possible objectives.
Traits that elitism tends to select against (or neutral) ... - Altruism/desire to help others
Effective altruism is the desire to help others the most while valuing everyone, even those outside one's immediate circles, more equally. Elite decisionmaking is to an extent based on favors and dynamics among friends and colleagues.
Traits that elitism tends to select for - Ambition/desire for power
I'd say acceptance/internalization of a specific traditional hierarchical structure and an understanding of oneself as competent to progress within it.
In EA, there’s a pretty solid correlation between people who have started big and impactful projects and their origins in elite environments (Sam Bankman-Fried, Will MacAskill, Holden Karnofsky, etc.). Some of the most successful companies in the world (e.g. Google, Apple, Paypal) have historically also been quite selective and operate within a sphere of prestige.
I am assuming that you are defining the 'eliteness' metric as a sum of school name, parents' income, and Western background? Please correct my bias.
Is the correlation only apparent? For example, imagine that instead of (elite) Rob Mather raising billions for a bednet charity, a (non-elite) thoughtful person with a high school education and $5/day had started organizing their (also non-elite) friends to talk about cost-effective solutions to all issues in sub-Saharan Africa in 2004 and had been raising those billions since, as solutions were developed. Maybe many more problems would have been solved better.
Counter-examples (people who started big and impactful projects from a non-elite background) may include Karolina Sarek, William Foege (Wiki), and Jack Rafferty. It could be interesting to see this percentage in the context of the share of elite vs. non-elite people in EA: (% who started impactful projects from an elite background / % elite in EA) / (% who started impactful projects from a non-elite background / % non-elite in EA). Further insight into the relative success of top vs. median elite talent can be gained by controlling for equal opportunities (which can currently be assumed if funding is awarded on the basis of competence).
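The comparison in parentheses is a simple representation ratio; it could be computed as below. The numbers in the usage example are made up purely for illustration, not real EA statistics:

```python
def elite_representation_ratio(elite_founders, elite_total,
                               nonelite_founders, nonelite_total):
    """Ratio > 1 suggests elite backgrounds are over-represented among
    founders of impactful projects relative to their share of the
    community; ratio < 1 suggests the reverse."""
    elite_rate = elite_founders / elite_total
    nonelite_rate = nonelite_founders / nonelite_total
    return elite_rate / nonelite_rate

# Hypothetical counts: 30 founders among 1,000 elite-background members,
# 10 founders among 2,000 non-elite-background members.
print(elite_representation_ratio(30, 1000, 10, 2000))  # 6.0
```

The same correction for base rates is what distinguishes "most founders are elite" from "elites are more likely to found," which is the claim actually at issue.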
It’s far easier to consider earning to give if you’re making $100k+ a year.
So, while EA was funding-constrained, it used to make sense to attract elites. Now, this argument applies to a lesser extent.
It can be incredibly demotivating being told that your potential for impact is far less than a select few.
Unless it is true - such as when impact is interpreted as representing an institution that aspires to normative change - in which case you realize that speaking with elite people in an elite way is not really for you anyway, and you do something else, such as running projects or developing ideas. This is an equalizing dynamic in which 'potential for impact' is just a phrase.
Recruiting from the same 10-20 universities who all have similar demographics makes it more likely to end up engaging in groupthink.
Norms of thinking diversity can be more influential on whether a group has issues with groupthink than the composition of the group, considering that people interact with others. For example, if the norm is prototyping solutions with intended beneficiaries, engaging them in solving the issues and stating their priorities in a way that mitigates experimenter bias and motivates thoughtful sincerity, and considering a maximally expanded moral circle, then the quality of solutions should not be reduced even if people from only 10-20 schools are involved. On the other hand, if the norm is, for instance, that everyone reads the same material and is somewhat motivated to donate to GiveWell and spread the word, then even a diverse group engages in groupthink.
Prestige doesn’t select for people who want to do the most good. This can be counteracted by recruitment processes that select more heavily for altruism and the self-selection effects of EA as a movement, but given the importance of strong value-alignment within EA, this is potentially damaging in the long-term.
Prestige selects for people among whom the highest share wants to do the most good when offered reasoning and evidence on opportunities, at least if prestige is interpreted that way. Imagine, for instance, a catering professional being presented with evidence on doing the most good through vegan lunches. Their normative background may not allow much room for impact considerations if that would mean forgone profit, unless it does. If EA's value should be kept by altruistic rather than other (e.g. financial) motivation, then recruitment should attract altruistic people who want to be effective and discourage others.
So, it depends on the senior-level positions. If you want to make changes in an authoritarian government, an (elite) insider will be very helpful. Similarly, a (non-elite) insider would be helpful for developing solutions within a non-elite context, such as solving priorities in Ghana under $100m. It does not matter whether normative solution developers (such as AI strategy researchers) are elite or not, as long as they understand and equally weigh everyone's interests. Positive discrimination for roles in which elites may have a better background (e.g. due to specialized school programs), such as technical AI safety research, may be counterproductive to the success of the area: less competent people would lead the organizations, and since the limited number of applicants from non-elite backgrounds is caused not by unwelcomingness but by limited opportunities to develop background skills, positive discrimination would not further increase diversity.
Complementarity can be considered: for example, pairing someone who can find the >$100m priorities in Ghana with someone who can raise the amount needed. However, funding from one's own network can also prevent the entire network from funding a much better project in the future, so not all elite people should be supported in advancing their own projects, since there are relatively many elites and few elite networks - unless offering an opportunity to fund a relatively less unusual project first enables the support of a more unusual (and impactful) project later. If the project objective is well-defined and people receive training, then anyone who can understand the training and will make sure it gets done can qualify.
“The original Mac team taught me that A-plus players like to work together, and they don't like it if you tolerate B-grade work.” — Steve Jobs
You are grading 'playing with Macs.' I think Bill Gates dropped out of college. And, just based on these two examples, compare their philanthropy... Does this mean that whoever is not cool cannot participate? Also, if students get used to upskilling others (and tolerating or benefiting from that), then EA can become less skills-constrained later and create more valuable opportunities for engaging people who score around the 70th (rather than the 95th) percentile on standardized exams.
Field-specific conferences—such as an AI safety or a biosecurity conference—benefit from restricting the conference to those with expertise. This ensures that everyone in attendance can contribute to the conversations or otherwise will benefit greatly from being exposed to the content.
While a biosecurity conference should probably only 'benefit' people who are 'vetted' by elite (if so defined) institutions as unlikely to actually think about making pathogens, since biosecurity is currently relatively limited, an AI safety conference can be somewhat more inclusive of 'possibly risky' people. This assumes that making an unaligned superintelligence is much more difficult than creating a pathogen.
AI safety conferences should exclude people who would make the field non-prestigious or lose the spirit of 'the solution to a great risk' - for example, by making it seem like an appeal by online media users for platforms to reduce biases in algorithms that affect them negatively. Perhaps even more than one's elite background, the ability to keep up that spirit can be correlated with a traditionally empowered personal identity (such as gender and race) and internalization of those norms of power (rather than critical thinking about them). Not everyone with the ability to 'uphold a unique solution narrative' must be from that demographic, and not everyone in that group has to have this ability (only a critical mass does). This applies as long as people negatively affected by traditional power structures perceive a negative emotion that would prevent them from presenting objective evidence and reasoning to decisionmakers.
Project funding and entrepreneurship
So, everything except community building and entry-level employment? Should there be community building in non-elite contexts (while elites, in some sense, within or beyond these contexts may or may not be preferred)? One counterargument is similar to the AI safety 'spirit' one above: people would be perceived as suffering from disempowerment and would thus appeal less effectively. Another follows your standards argument: people who would slack with Bs in impact would just be OK with some problems left unresolved. Arguments for include diversity of epistemics, problem awareness, and solution-relevant insights, and facilitating mutually beneficial cooperation (e.g. elites gain the wellbeing of people who have more time for developing non-strategic relationships, and non-elites gain the standards of perfecting solutions), both within EA and in project outcomes.
It may depend on the org. Some orgs (e.g. high-profile fundraising) that generally prefer people from elite backgrounds can prefer them for entry-level positions too. This can account for the 'As are disgraced by Bs and would not do a favor for them, since they do not gain acknowledgement from other As but can be perceived as weak or socially uncompetitive' argument among the 'target audiences' of these orgs.
If doing nothing and waiting for social norms to change is appropriate, non-elites should be excluded from these entry-level roles. The org can actively change the norms by training non-elites to resemble elites (which can be suboptimal because it exhibits acceptance of the elite standard, which is thus exclusive) or by accepting anyone who can make the target-audience elites realize that their standard is not absolute. In that case, the eliteness of one's background should not contribute to hiring decisions.
EAGx conferences and some EAGs
Depending on the attitude of the key decisionmakers at EAGs/EAGxs, such as large funders, eliteness should be preferred, not a selection criterion, or dis-preferred. It is possible that anyone who demonstrates willingness and potential to make high impact can be considered elite in this context.
For example, traits such as critical thinking and a sharp intuition are useful for generalists.
Is it that elites have less sharp intuition than non-elites? An argument for this is that elites are in their positions because they reflect the values of their institution without emotional issues, which requires suppressing one's intuitive reasoning. If an institution values critical thinking, gaining information from a diversity of sources, and forming opinions without consideration of one's acceptance in traditional hierarchies, then elites can develop intuition.
What are the arguments/evidence for low social recognition of work outside of EA orgs?
Working for the government with an EA mindset should be recognized. Some other types of work outside of EA orgs are not well recognized but should be. EA-related opportunities in all non-EA-labeled orgs can always be considered alongside moving to EA-labeled orgs, based on marginal value.
For example, someone who works in an area as seemingly unrelated to EA as backend coding for a food delivery app can see if they can make an algorithm that makes vegan food more appealing; learn anything generalizable to AI safety that they can share with decisionmakers who would not otherwise have thought of the idea; gain customers by selling hunger banquet tickets; help the company sell its environmental impact by outcompeting electric scooter delivery through purchasing the much more cost-effective Founders Pledge environmental package in bulk; add some catchy discounts on healthy-food alternatives to smoking for at-risk youth users, etc. - plus donate to different projects that can address important issues - and compare that to their estimate of the impact of switching to an EA org (e.g. full-time AI safety research or vegan food advocacy).
for funders to estimate the amount of money they would have given to the organisation over a reasonably long period of time and provide that amount (potentially plus a bonus for honesty) to the board/staff regardless.
Do you think orgs do not already bring some evidence to grantmakers in order to gain funding, and that this would resolve the issue? Depending on the jurisdiction, there may be laws associated with laying off employees, including several months' salary to enable the person to find employment, or government unemployment schemes. Do you think grantmakers make decisions based on perceived employee insecurity rather than cost-effectiveness? What decisionmaking processes make it so that relatively cost-ineffective projects continue to be funded? Where the org does not provide such funding and government funding is not available, should employees of EA-related orgs be encouraged to have several months of savings around grant renewal decision times?
Prima facie, the norm against long-term projects and employment sounds quite 'effectiveness/efficiency-decreasing,' but this may just be a bias based on limited experience with the option.
Long-term projects, if that means funding renewal security, are not the norm in EA. Funding is renewed periodically, based on the most competitive opportunities at any given time. Any lower marginal cost of established projects' unit output is taken into account in funding new and existing ones.
Long-term paid employment security is greater than that of projects. Organizations may prefer applicants who are willing to work for the org for a considerable time. This can be because of the returns on training for that org and the relationship-development aspect of some roles.
A scheme where orgs cooperate in both skills training and relationship development can expedite innovation (skills can complement each other) and improve decisionmakers' experiences (they are trusted in resolving problems based on various insights rather than one-sidedly 'lobbied' to make specific decisions).
Non-EA orgs should also be involved, for the development of general skills that could be a suboptimal use of EA-related orgs' time to train and of relationships that can be necessary for some EA-related projects.
1) Which books? There should easily be 100 books related to EA (more and less broadly).
2) Some of the books are thought-stimulating (the value is readers' contemplation of unanswered questions), some informative (they present valuable information useful for problem solving), some directly motivational (they get readers to focus on an important problem), and some vaguely motivational (they inspire people to do good by discussing other topics).
I think summaries could be counterproductive for vaguely motivational books, but could otherwise improve readers' experience, because if one enjoys reading and realizing that they can do good, they can feel better or worse about it than if they skip this step and go straight to reviewing an option for doing good effectively.
For directly motivational books, summaries should be the most valuable (people should be informed about important issues), but the summaries should be published only after various pressing issues are covered (for example, if only wild animal welfare books are summarized, then people who do not want to read entire texts could focus on this area, leaving AI safety with attention disproportionate to its marginal value/need).
Thought-stimulating books could actually be discussed without (everyone) reading them, because that diversifies perspectives.
Summarizing info that helps solve problems can be valuable to anyone who is resolving the issues.
option of doing this on demand.
I think EA orgs would enjoy it if it helps them in what they do. Some people may be interested in having different people read books and then summarize evidence and reasoning on some questions. One person/org can then benefit from the knowledge and thinking of multiple people and thus be more efficient.
One fun way could maybe be a 2-hour block at events (4x30 mins). Virtual events are another option (more affordable than in-person; they can take longer because attention is not so scarce). For a more chilled atmosphere, maybe a weekend retreat where a third of the program is this scenario, ideally with multiple groups running it at the same time, to combine insights and meet more friends.
'Commenting sprees' - blocks of time where discussion with more immediate replies would be encouraged.
for 6 or 7 of the 8 meetings it was just me and the facilitator on a video chat
Wow, maybe the number of people in a group could be somewhat increased.
The biggest benefit for me was the ability to bounce ideas around and have an instant response/reply, shortening the feedback loop compared to simply reading and Googling around on my own.
Ah, OK - maybe 'commenting sprees' could be implemented; otherwise, this aspect can be difficult to imitate in written form.
my impression is that very few people have the comprehensive expertise to think well about these things.
Do you mean people in general? Or people in EA, neuroscience, consciousness research, ...?
In humans and other apes, the cerebral cortex is also widely believed to be largely responsible for generating our conscious experiences.
Could you share any resources that suggest otherwise? It could also be interesting to see them on a timeline.
A human without a cerebral cortex is, for the most part, a living body with no mind.
Only if 'mind' is interpreted as "awareness and intentional activity" and "body regulation and arousal" is not considered 'living.'
The behavior of decorticate rats is remarkably unaffected by extensive damage or removal of their cerebral cortex.
So, can it be argued that healthy rats are unaware and unintentional and thus do not live?
Various social changes
Notably, inferior maternal care
Is it that what makes us 'a species with a prominent cortex' is care (I mean the reverse direction of causality)? How do k- and r-selection relate to a species' cortex properties (and to consciousness)?
Difficulty with some complex cognitive tasks
Inability to reason abstractly about locations
Absence of food hoarding behavior
Can individuals with no or a relatively non-prominent cortex really be 'present in the moment and place'? Would species with a 'lesser' cortex than humans then be much happier when they feel well in a place and are fed, and sadder when they are not, than humans are, because humans 'carry' memories and plans?
require subtle experimental designs to detect.
Out of curiosity, do you know of any designs or labs?
the behavioral effects of decortication suggest that consciousness is either restricted to primates or else it is distributed fairly widely outside mammals.
Only rats were studied, so conclusions about various brains and nervous systems cannot be drawn. Could this reasoning suggest that primates are more conscious than other species, but not that those species are non-conscious, since the cortex does affect rats' behavior somewhat? Also, even if a decorticate rat behaves similarly to one with a cortex, it may be less conscious; for example, it may not feel closeness with family as much, or in specific ways.
Along the lines of 4), it could be argued that humans have some consciousness in different parts of the brain, nervous system, cells, and other parts of the body, depending on the definition of consciousness. Thus, humans can 'empathize' with different species by focusing on/activating that part and deactivating others. For example, species that experience "arousal" but not "intentional activity" can be empathized with by focusing on the former while seeking to block the latter.
Are you aware of the Cambridge Declaration on Consciousness? What do you think about it?
A critical reading is that it seems most of these people would motivate chatters to take career steps that may advance some of the projects that they or others have in mind.
Is this the objective (it definitely can be worthwhile), while other avenues should be found for discussing solutions to the problems these professionals focus on?
The two people who so far commented about being willing to chat seem interested in brainstorming solutions (and unstructured chatting) in addition to sharing more one-sided advice.
Before I noticed these comments, I meant to suggest renaming the post to something like 'Not sure about your next career steps and getting bored? Book a chat with an EA professional!' Given the comments, expectations for the chat should be covered by the topics list.
These are just random and provocative thoughts.
"well, the world sucks, but..."
Maybe some of the people you meet consider themselves to have limited agency, so they express aggression and acknowledgement of the state of affairs? Or they agree with the system they live in, even if it allows some people to make a profit (if the sentence continues 'but what can you do, you need to make people [like themselves] pay high rent, since then if you are [their job] you can benefit from [what they do]'). Or they acknowledge the sentiment that one could be interested in leaving their situation, but express inability or limited opportunities to do so ('but it's the same everywhere'). Or they perceive a 'threat' of being deprived of their privilege if they focus on improving global issues ('but it's actually ok'/'but I also have issues that I need help with'/'but improving the situation would require a change in the way I think, which is challenging').
lots of people in the EA orbit are persistently unhappy
Could you quantify or explain 'lots,' 'persistently,' and 'unhappy'?
And the solution to being persistently unhappy with a social arrangement or memeplex, usually, is to leave.
Are there any risks associated with leaving a system with ambitions of doing the most good which is suboptimal?
I am not an expert, but don't some therapies recommend reinterpreting one's situation, communicating, and, if there are no other considerations that could make it better to stay, leaving?
Famously, this often doesn't occur to the person suffering for a long time, if ever, even if it looks like the obvious correct choice from the outside.
One interpretation is that once a person is aware of global issues, others' commitment to resolve them, their opportunities to participate (grants, jobs), and their marginal value in this effort (funding overhang), then they cannot leave the memeplex of doing the most good, even if they (temporarily) leave the community, because they are compelled by need.
Another interpretation is that people who learn about EA will always think about impact to some extent, even if they leave EA, because the ideas make sense and they would feel somewhat bad not considering them once they have learned it is 'good' to do so.
A third way to think about leaving EA is that people learn about it, participate for a while, and, while acknowledging that they could do a lot of good, conclude that they would be happier if they left and fully focused on other things. They have the 'approval' and can reason that systemic change, which would make it so that all people have better impact, is needed, and that there are enough brilliant people who actually choose to stay for whatever reason.
People in all three categories can be happy or unhappy to different extents. One can suggest that keeping oneself physically healthy, having good relationships, being secure, treating and preventing any mental health issues, doing what one likes, and existing in an environment that one wants to be a part of can support one's happiness. A person who 'cannot leave' can thus be very happy while a person who 'leaves and forgets' very unhappy and vice versa.
Telling people to leave instead of supporting them with their issues can reduce the aspect of the community's happiness that is based on people's valuation of the system they live in ;)
It can be argued that people who would be better off leaving should leave even if they were 'great unique assets,' but that people should be supported with whatever they need, to a reasonable extent (e. g. taking advantage of the EA mindfulness program, Effective Self-Help, or working with some of the therapists and providers recommended by the EA Mental Health Navigator or with the AI Safety Support Health Coach). Possibly, few people would argue that the relatively limited resources spent on mental health support in the community are excessive. It could be great even if people use these 'free' resources and then leave the community, if they are thus happier. Of course, one can also take advantage of non-EA therapists and resources.
What do I mean by leaving?
I agree that non-EA friends can be fun. Even a person who is (momentarily) highly influenced by thinking about impact, who would probably feel bad about hanging out with, for example, an oil investor inconsiderate of the environment, can be fine with most people, and may even learn that issues are not as dark or bleak as some EA narratives make them seem (e. g. learning from an engineer that some software is quite ok).
Why should one not apply for EA jobs? Is it to save time that one might not have (e. g. because one would be sacrificing socially or financially)? Otherwise, applying for any well-compensated job can make sense for financial needs.
But there is no one understanding of self-worth in EA, just like there is no clear definition of EA. Should there be one understanding? For example, something which would take into account self-care as well as contribution by various means, with respect to one's potential? Elliot Olds could be interested in this as he posted something about an impact list.
I keep seeing people be nervous to like, seriously criticize EA, because their (aspirational) livelihoods depend on it.
Hopefully their number is falling? This piece to an extent sets a standard of openness, the Future Fund has been gathering criticism, and ideas on how to elicit criticism, since the beginning, OPP is also asking what it could be better off funding, and criticism is even the subject of contests.
Give the cool kids a few chances to let you in.
There is no coolness scale ...
Has bursts of manic energy and gets excited about projects, but loses initiative when nobody really supports them.
There was an article related to this. I hope EA content does not cause mania; nothing on 'memetic content' was mentioned as a trigger.
You might have less impact.
Or you might not!
It can be argued that one might not, because what the EA community can offer for one's choice of 'high benevolence' is nothing, besides perhaps the community itself. It can be better if a very large majority of people want to be in the community, or are somewhat in control of, and supported in, their vacillations in and out.
The people who seem to have had the greatest impact in history - Borlaug and Petrov, for instance - just sort of were in the right place at the right time with the right interests.
Hm, just a hypothesis: what could be causing any mania in EA is the highlighting of heroes who were at the right place at the right time with an interest in impact, and the narrating of opportunities in EA in a similar way. This issue can be mitigated by suggesting that these individuals were parts of institutions that made it so that they took the actions and decisions they did; plus, that there are many people unacknowledged for their contributions due, generally, to social structures which motivate people to work for them as they seek status. It is really cooperation that makes representatives succeed.
Petrov later indicated that the influences on his decision included that he had been told a US strike would be all-out, so five missiles seemed an illogical start ... He felt that his civilian training helped him make the right decision. He said that his colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile launch if they had been on his shift. (Wikipedia)
So, the person who told Petrov about the likely extent of a strike, the people he encountered during his training who improved his decisionmaking abilities, and possibly the superiors (and their superiors) who made it so that Petrov trusted he was supported in independent decisionmaking can all be credited for this decision.
But it's interesting that inner core EA activities like reducing AI risk or community building at top universities just seem... weird and onanistic from the outside.
One can suggest that this can be addressed. Narratives which make people seem unfriendly or weird can be reconsidered. For example, AI risk could be narrated as ameliorating human biases with respect to the law and improving human institutions based on biases detected by AI, though it would be a mistake to exclude the variety of other issues that AI safety addresses, such as the potential of, and risks associated with, an intergalactic expansion.
An argument against discouraging 'the general public' from paying attention to some higher-risk issues, such as AI safety or biosecurity, is that this could increase risk. Still, when structures that support positive development with increased public interest are not in place, it may be better that EA core activities are just not discussed much in public, given the emotions around them. Those emotions could be rationalized, and any negative ones perhaps thus improved.
Community building at top universities can be another example of a strategic consideration. It may be better to first include top problem solvers, and only afterwards people less skilled at it: a group started by non-top problem solvers could address issues relatively worse.
That being said, you probably have some instinct of whether involvement with organized EA is making you unhappy.
What do you mean by 'organized' EA? Are some resources/events/people more 'organized' than others? Could you share some examples?
Those are bad signs.
One can agree. Is it that the first two could be addressed by therapy focused on emotions while the latter two by reason?
OK! You mean super-healthy as resilient to biological illnesses or perhaps processes (such as aging).
Nanobots would probably work but mind uploading could be easier since biological bodies would not need to be kept up.
While physical illness would not be possible in the digital world, mental health issues could occur. There should be a way to isolate only positive emotions. But, I still think that actions could be performed and emotions exhibited but nothing would be felt by entities that do not have structures similar to those in human brain that biologically/chemically process emotions. Do you think that a silicon-based machine that incorporates specific chemical structures could be sentient?
Ah, I think there is nothing beyond 'healthy.' Once one is unaffected by external and internal biological matters, they are healthy. Traditional physical competition would probably not make sense in the digital world. For example, high jump. But, humans could suffer digital viruses, which could be perhaps worse than the biological ones. But then, how would you differentiate a digital virus from an interaction, if both would change some aspects of the code or parameters?
OK, but did the discussion Programs add something that you would not get from reading the Forum, such as nicer experience (or motivation to meet more people in person) or ideas on topics that could be also interesting to your colleagues? If so, then should there be an option without these aspects (e. g. for people less interested in more 'dynamics navigation' focused chats)?
Also, it could be argued that the fraction of material that people who do not participate in a discussion-based program would not read can be crucial to people's understanding of EA. But, early-on specialization of people can be optimal. For example, consider that a person interested in global development never reads any insect welfare texts. They think about massively scaling up insect farming to enable people to escape poverty. They address skepticism from insect welfare researchers with assurances of positive welfare. Thus, the researchers are motivated to find a solution optimal for both humans and insects. If everyone read introductory texts from both (all) areas, it is possible that the line of thought leading to this mutually beneficial solution would never have been developed.
Thanks! Yeah that makes sense.
Yes! .. thank you. I think maybe there can be some organized page of summaries that people going through a 'fellowship' can update - so an aspect of the Wiki. Otherwise, just writing a comment, or a comment on a comment, can be a good way to demonstrate that one thought about the topics. Or, forming several narratives of the articles can be nice (the activity where anyone writes the next sentence).
Thank you for pointing out the overlap. I can come up only with organization according to a vector space where the elements are the extent to which the article relates to specific topics but it would be nice to have something with better flow and with paths (with intersections) that would lead one to go for a bit at a time.
A megathread would not solve the organization issue and could feel like the thoughts developed are not being utilized. Multiple smaller threads can be cool, but mostly for questions that are actually advanced by discussion or for those that can be interesting to get opinions on (not e. g. asking someone to rephrase main points). Stickied questions under tags may be a solution - also once a question is somewhat resolved or opinions at the time gathered, it can be replaced.
yes! that's right I was off by a factor of 10 ..
ok, as you wish.
Oh yeah, that makes sense. And if humans can't imagine what super-healthy is then they need to defer to AGI - but should not misspecify what they meant ..
Since you define worldview as a "set of ... beliefs that favor a certain kind of giving," then it matters whether you understand income and health as "intrinsically [or] instrumentally valuable." In the latter but not the former case, if you learn that income and health do not optimize for your desired end, you would change your giving.
I am understanding investment recommendation implications as programs on education, relationship improvement, cooperation (with achievement outcomes), mental health, chronic pain reduction, happiness vs. life satisfaction research, conflict prevention and mitigation, companionship, employment, crime reduction, and democracy:
and the objective list, where wellbeing consists in various objective goods such as knowledge, love, and achievement
underweight invisible, ongoing misery (such as mental illness or chronic pain)
the best thing for improving happiness may be different from the best thing for increasing life satisfaction. Investigating this requires extra work.
other things affect our wellbeing too (such as war, loneliness, unemployment, crime, living in a democracy, etc.) and their value is not entirely reducible to effects on health or income.
Divestment recommendations can be understood as bednets in Kenya, GiveDirectly transfers to some but not other members of communities with a large proportion of extremely poor people, and the Centre for Pesticide Suicide Prevention:
It’s worth pointing out that many of those whose lives are saved by the interventions that OP funds, such as anti-malaria bednets, will have a life satisfaction score below the neutral point, unless we set it at, or near to, 0/10. IDinsight’s aforementioned beneficiary preferences survey has an SWB question and found those surveyed in Kenya had an average life satisfaction score of 2.3/10.
But OP (and others) tend to ignore fairness; the aim is just to do the most good.
But, don’t happier people gain a greater benefit from an extra year of life than less happy people? If so, how can it be consistent to conclude we should account for quantity when assessing the value of saving lives, but not quality?
I understand that you disengage from replies but I am interested in OP's perspective on the 0-10 life satisfaction value at which you would invest into life satisfaction improving rather than family planning programs.
I am also wondering about your definition of health and rationale for selecting the DALY metric to represent this state.
I am not saying that international institutions prove that they can 100% prevent human-made catastrophes but I think that they have the potential, if institutions are understood as the sets of norms that govern human behavior rather than large intergovernmental organizations, such as the UN.
It may be technically easier but normatively more difficult for people to harm others, including for decisionmakers to cause existential catastrophes. For example, nuclear proliferation and bioweapons stockpiling were not extensively criticized by the public in the past, because people had other issues and offering critical perspectives on decisionmaking was not institutionalized. Now, the public holds decisionmakers accountable for not using these weapons, through a 'sentiment of disapproval.' This is uniquely perceived by humans, who act according to emotions.
People can be manipulated by the internet only to an extent. This bears on their general ability to comprehend consequences and form their own opinions based on different perspectives. For example, if people start seeing ads on Facebook about voting for a proliferation proponent that appeal to their aggression/use biases to solicit fear, and another ad shows the risks of proliferation/war and explains the personal benefits of peace, then people will likely vote for peace.
That makes sense: as an oversimplification, if AGI is trained to optimize for the expression 'extreme pain' then humans could learn to use the scale of 'pain' to denote pleasure. This would be an anti-alignment failure.
That makes a lot of sense too: I think that one's capacity to advance innovative objectives efficiently increases with the improving subjective experience of the participants/employees. For example, if there is a group of people who are beaten every time they disobey orders, mandated to make a torture technology, and another group that makes torture regulation and fosters positive cooperation and relationship norms, the former should think less innovatively and cooperate worse than the latter. So, the regulation should be better than the aggression.
But what if one 'quality unit' of the torture technology causes 100x more harm than one 'quality unit' of the regulatory technology can prevent? For instance, consider releasing an existing virus vs. preventing transmissions. Then, you need some institutional norms to prevent people from aggression. Ideally, there would be no such malevolent people, which can be achieved either by AI (e. g. recommendation algorithms for pleasure competitive with/much better than hurting others, which is not pleasant, just traumatizing and possibly memorable) or by humans (e. g. programs for at-risk children).
Yeah! It seems almost like an existential catastrophe if people appear great (and alive) but are actually suffering significantly. Considering that AI could bring this about with one misspecification that somehow compounds, AGI is risky. Humans would not develop in this way, because they perceive suffering and so would stop doing what they dislike, if there is no 'greater manipulative power' compelling them otherwise.
I am optimistic about humans being able to develop an AGI that improves wellbeing better than what would have happened without such technology while keeping control over it to make adjustments if they perceive a decrease in wellbeing. But, if it is not necessary, then perhaps why take the risk.
Yes, there are ways for this to go wrong. I'd not like to ingest nanobots which would be something like a worm infection but worse!
But if the AI is actually benevolent, then it could be better than humans, or than an ASI optimized for some of their objectives and working with human impulses (for example, offering food which looks the biggest but has a suboptimal nutrient ratio and disregards animal welfare, or, instead of a social skills course, making one addicted to a platform which makes people feel worse about others).
AGI should prevent people from impulsive decisionmaking and foster rationality. It should be better, at least in the interim, than humans who could perpetuate some suboptimal characteristics. The issue is that maybe then humans would become practically AI without any apparent intervention.
But if human institutions make it so that weapons are not deployed, then this can be equivalent to an AGI 'code' of safety? Also, if AGI is deployed by malevolent humans (or those who do not know pleasure but mostly abuse), this can be worse than no AGI.