Posts

EA & LW Forum Weekly Summary (13th - 19th March 2023) 2023-03-20T04:18:16.544Z
AI Safety - 7 months of discussion in 17 minutes 2023-03-15T23:41:37.375Z
EA & LW Forum Weekly Summary (6th - 12th March 2023) 2023-03-14T03:01:06.162Z
EA & LW Forum Weekly Summary (27th Feb - 5th Mar 2023) 2023-03-06T03:18:31.741Z
EA & LW Forum Weekly Summary (20th - 26th Feb 2023) 2023-02-27T03:46:39.330Z
EA & LW Forum Weekly Summary (6th - 19th Feb 2023) 2023-02-21T00:26:32.622Z
Animal Welfare - 6 Months in 6 Minutes 2023-02-08T21:45:34.353Z
EA & LW Forum Weekly Summary (30th Jan - 5th Feb 2023) 2023-02-07T02:13:12.255Z
EA & LW Forum Weekly Summary (23rd - 29th Jan '23) 2023-01-31T00:36:14.532Z
EA & LW Forum Weekly Summary (16th - 22nd Jan '23) 2023-01-23T03:46:10.740Z
EA & LW Forum Summaries (9th Jan to 15th Jan 23') 2023-01-18T07:29:06.588Z
EA & LW Forum Summaries - Holiday Edition (19th Dec - 8th Jan) 2023-01-09T21:06:34.308Z
EA & LW Forums Weekly Summary (12th Dec - 18th Dec 22') 2022-12-20T09:49:50.787Z
EA & LW Forums Weekly Summary (5th Dec - 11th Dec 22') 2022-12-13T02:53:28.627Z
EA & LW Forums Weekly Summary (28th Nov - 4th Dec 22') 2022-12-06T09:38:15.409Z
EA & LW Forums Weekly Summary (14th Nov - 27th Nov 22') 2022-11-29T22:59:58.941Z
EA & LW Forums Weekly Summary (7th Nov - 13th Nov 22') 2022-11-16T03:04:38.401Z
EA & LW Forums Weekly Summary (31st Oct - 6th Nov 22') 2022-11-08T03:58:25.581Z
EA & LW Forums Weekly Summary (24 - 30th Oct 22') 2022-11-01T02:58:09.892Z
EA & LW Forums Weekly Summary (17 - 23 Oct 22') 2022-10-25T02:57:43.202Z
EA & LW Forums Weekly Summary (10 - 16 Oct 22') 2022-10-17T22:51:03.454Z
EA & LW Forums Weekly Summary (26 Sep - 9 Oct 22') 2022-10-10T23:58:22.977Z
EA & LW Forums Weekly Summary (19 - 25 Sep 22') 2022-09-28T20:13:00.964Z
EA & LW Forums Weekly Summary (12 - 18 Sep 22’) 2022-09-19T05:06:00.997Z
EA & LW Forums Weekly Summary (5 - 11 Sep 22’) 2022-09-12T23:21:59.293Z
EA & LW Forums Weekly Summary (28 Aug - 3 Sep 22’) 2022-09-06T10:46:03.715Z
EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) 2022-08-30T01:37:23.252Z

Comments

Comment by Zoe Williams (GreyArea) on What to think when a language model tells you it's sentient · 2023-02-25T02:14:36.296Z · EA · GW

Fixed, thanks!

Comment by Zoe Williams (GreyArea) on What to think when a language model tells you it's sentient · 2023-02-24T05:17:29.690Z · EA · GW

Post summary (feel free to suggest edits!):
Argues that statements by large language models that seem to report their internal life (eg. ‘I feel scared because I don’t know what to do’) aren’t straightforward evidence either for or against the sentience of that model. As an analogy, parrots are probably sentient and very likely feel pain. But when a parrot says ‘I feel pain’, that doesn’t mean it is in pain.

It might be possible to train systems to more accurately report whether they are sentient, by removing other incentives for saying conscious-sounding things and training them to report their own mental states. However, this could advance dangerous capabilities like situational awareness, and training on self-reflection might itself be what ends up making a system sentient.

(This will appear in this week's forum summary. If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Animal Welfare - 6 Months in 6 Minutes · 2023-02-19T21:48:52.569Z · EA · GW

Interesting question, thanks for adding this! I don't have any background in animal welfare research or the plant/cell based meat area beyond reading & chatting with people, but popped some thoughts below regardless:

My leaning would be that having both is better than just one, to provide more choice and more ways to move away from traditional meat. I'm not sure I buy the fourth point - while there will be some competition between plant-based and cell-based meat, they both also compete with the currently much larger traditional meat market, and I think some consumers would eat plant-based but not cell-based, and vice versa. Taste, look, feel, and cost aren't the only relevant factors - the optics and cultural connotations of each also matter, and these are quite different.

In terms of the proportion of promotion efforts going to each, I'm really not sure. A strategy there should probably look at: how developed each tech is (suggesting more plant-based meat promotion earlier on); uptake rates and the effect of promotion (including whether uptake plateaus in a population, suggesting a new option is needed for those remaining); the populations being promoted to, with their unique concerns and likelihood of taking up one option or the other; and any tipping points or opposition that need to be countered in a timely way for an option to remain viable in a location or to get past legislative hurdles.

(Also sorry for the late reply! I was on vacation last week)

Comment by Zoe Williams (GreyArea) on We're no longer "pausing most new longtermist funding commitments" · 2023-01-31T05:06:13.677Z · EA · GW

Post summary (feel free to suggest edits!):
In November 2022, Open Philanthropy (OP) announced a soft pause on new longtermist funding commitments, while they re-evaluated their bar for funding. This is now lifted and a new bar set.

The process for setting the new bar was:

  1. Rank past grants by both OP and now-defunct FTX-associated funders, and divide these into tiers.
  2. Under the assumption of 30-50% of OP’s funding going to longtermist causes, estimate the annual spending needed to exhaust these funds in 20-50 years.
  3. Play around with which grants would have made the cut at different budget levels, and, using a heavy dose of intuition, come to an all-things-considered new bar.

They landed on funding everything that was ‘tier 4’ or above, and some ‘tier 5’ under certain conditions (eg. low time cost to evaluate, potentially stopping funding in future). In practice this means ~55% of OP longtermist grants over the past 18 months would have been funded under the new bar.

(This will appear in this week's forum summary. If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on New research findings on narrative in U.S. animal advocacy · 2023-01-31T00:21:03.324Z · EA · GW

Thanks, will do :)

Comment by Zoe Williams (GreyArea) on New research findings on narrative in U.S. animal advocacy · 2023-01-30T22:18:39.501Z · EA · GW

Summary of this post (feel free to suggest edits!):
Pax Fauna recently completed an 18-month study on messaging around accelerating away from animal farming in the US. The study involved literature reviews, interviews with meat eaters, and focus groups and online surveys to test messaging.

They found that most advocacy focuses on the animal, human health, and environmental harms of animal farming. However, the biggest barrier to action for many people tended to be “futility” - the feeling that their actions didn’t matter because, even if they changed, the world wouldn’t.

Based on this, they suggest reframing messaging to focus on how we as a society / species are always evolving and progressing forwards, and that evolving beyond animal farming is something we can do, should do, and are already doing. They also suggest refocusing strategy around this - eg. advocating for pro-animal policies, as opposed to asking consumers to make individual changes to their food choices.

(This will appear in this week's forum summary. If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on EA & LW Forum Summaries (9th Jan to 15th Jan 23') · 2023-01-18T18:59:55.648Z · EA · GW

Great to hear :)

Comment by Zoe Williams (GreyArea) on Overview of the Pathogen Biosurveillance Landscape · 2023-01-18T01:21:47.871Z · EA · GW

Great read, thanks for posting! A quick heads up that many of the links in the table of contents are broken (either linking to start of post, or to non-existent websites).

Summary of this post, and the sequel post Technological Bottlenecks for PCR, LAMP, and Metagenomics Sequencing (feel free to suggest edits!)
Biosurveillance systems help with early identification of pathogens that could cause pandemics. The authors rated existing methods against 10 criteria, including usefulness, quality of evidence, feasibility, and potential risks.

High scoring methods included: point-of-person (non-lab tests, eg. rapid antigen), clinical (lab tests, eg. PCR), digital (reporting cases to a database), and environmental (eg. monitoring wastewater). Technological developments in point-of-person and clinical surveillance (ie. faster, easier, cheaper, home-based tests) are seen as promising. Environmental surveillance would benefit from increasing the sensitivity of wastewater testing equipment and developing new concentration techniques that work for a wide variety of pathogens (bacteria, viruses, fungi). Specific bottlenecks and potential solutions (eg. improving the performance of LAMP, a cheaper PCR alternative, under cold temperatures) are discussed in the second post.

Slightly lower scoring methods were: animal (frequent sampling and wearable devices) and syndromic (monitoring symptoms). Data sharing between key parties (preferably cross-country) could assist with syndromic and digital methods. Animal monitoring is less promising because, while 60% of known infectious diseases are zoonotic, we lack the capability to predict virulence and transmissibility to humans.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on EA career guide for people from LMICs · 2022-12-15T23:50:45.783Z · EA · GW

Post summary (feel free to suggest edits!):
The authors broadly recommend the following for EAs from low and middle income countries (LMICs):

  • Build career capital early on
  • Work on global issues over local ones, unless there are clear reasons to prefer the latter
  • Have some individuals do local versions of: community building, priorities research, charity-related activities, or career advising

They discuss pros, cons, and concrete next steps for each. Individuals can use the scale / neglectedness / tractability framework, marginal value, and personal fit to assess options. They suggest looking for local comparative advantage at global priorities, and taking the time to upskill and engage deeply with EA ideas before jumping into direct work.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on CEEALAR: 2022 Update · 2022-12-14T23:09:10.907Z · EA · GW

Post summary (feel free to suggest edits!):
The Centre for Enabling EA Learning & Research (CEEALAR) is an EA hotel that provides grants in the form of food and accommodation on-site in Blackpool, UK. They have lots of space and encourage applications from those wishing to learn or work on research or charitable projects in any cause area. This includes study and upskilling with the intent to move into those areas.

Since opening 4.5 years ago, they’ve supported ~100 EAs with their career development, and hosted another ~200 visitors for events / networking / community building. It costs CEEALAR ~£800/month to host someone - including free food, logistics, and project guidance. This is ~13% of the cost of an established EA worker, and an example of hits-based giving.

They have plans to expand, and are fixing up a next-door property that will increase capacity by ~70%. They welcome donations, though aren’t in imminent need (they have 12 - 20 months of runway, depending on factors covered in the post). They’re also looking for a handy-person.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Cryptocurrency is not all bad. We should stay away from it anyway. · 2022-12-14T03:50:18.274Z · EA · GW

Post summary (feel free to suggest edits!):
The author argues that “the crypto industry as a whole has significant problems with speculative bubbles, ponzis, scams, frauds, hacks, and general incompetence”, and that EA orgs should avoid being significantly associated with it until the industry becomes stable.

In the last year, at least 4 crypto firms collapsed, excluding FTX. Previous downturns have included the collapse of Mt. Gox, then the largest crypto exchange. Crypto’s use is dominated by people trying to get rich - after 14 years, there are almost no widespread uses outside of this. All of this suggests it’s a speculative bubble, and it will likely collapse again (though maybe not in the same way). If EA is associated with it, this could lead to a negative reputation that EA “keeps getting scammed”.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on r.i.c.e.'s neonatal lifesaving partnership is funded by GiveWell; a description of what we do · 2022-12-10T07:50:18.942Z · EA · GW

Post summary (feel free to suggest edits!):
r.i.c.e. collaborates with the Government of Uttar Pradesh and an organization in India to promote Kangaroo Mother Care (KMC), a well-established tool for increasing survival rates of low birth weight babies. They developed a public-private partnership to get the government’s KMC guidelines implemented cost-effectively in a public hospital.

Their best estimate, based on a combination of implementation costs and pre-existing research, is that it costs ~$1.8K per life saved. However, they are uncertain, and plan to compare survival rates in the targeted hospital vs. others in the region next year.

Both Founders Pledge and GiveWell have made investments this year. They welcome further support - you can donate here. Donations will help maintain the program, scale it up, do better impact evaluation, and potentially expand to other hospitals if they find good implementation partners.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Smallpox eradication · 2022-12-10T07:29:53.109Z · EA · GW

Post summary (feel free to suggest edits!):
Smallpox was confirmed as eradicated on December 9th, 1979. Our World in Data has a great explorer on its history and how eradication was achieved.

Smallpox killed ~300 million people in the 20th century alone, and is the only human disease to have been completely eradicated. It also led to the first ever vaccine, after Edward Jenner demonstrated that exposure to cowpox - a related but less severe disease - protected against smallpox. In the 19th and 20th centuries, further improvements were made to the vaccine. In 1959, the WHO launched a global program to eradicate smallpox, including efforts to vaccinate (particularly those in contact with infected individuals - ‘ring vaccination’), isolate those infected, and monitor spread. They eventually contained the virus primarily to India (86% of cases were there in 1974), and with a final major vaccination campaign, dropped cases there to zero in 1976.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Binding Fuzzies and Utilons Together · 2022-12-09T10:09:03.271Z · EA · GW

Post summary (feel free to suggest edits!):
Some interventions are neglected because they have less emotional appeal. EA typically tackles this by redirecting more resources there. The authors suggest we should also tackle the cause, by designing marketing to make them more emotionally appealing. This could generate significant funding, more EA members, and faster engagement.

As an example, the Make-A-Wish website presents specific anecdotes about a sick child, while the Against Malaria Foundation website focuses on statistics. Psychology shows the former is more effective at generating charitable behavior.

Downsides include potential organizational and personal value drift, and reduction in relative funding for Longtermist areas if these are harder to produce emotional content for. They have high uncertainty and suggest a few initial research directions that EAs with a background in psychology could take to develop this further.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by GreyArea on [deleted post] 2022-12-09T09:37:38.711Z

Post summary (feel free to suggest edits!):
AI startups can be big money-makers, particularly as capabilities scale. The author argues that money is key to AI safety, because money:

  • Can convert into talent (eg. via funding AI safety industry labs, offering compute to safety researchers, and funding competitions, grants, and fellowships). Doubly so if the bottleneck becomes engineering talent and datasets instead of creative researchers.
  • Can convert into influence (eg. lobbying, buying board seats, soft power).
  • Is flexible and always useful.

The author thinks another $10B AI company would be unlikely to counterfactually accelerate timelines by more than a few weeks, and that the money / reduced-time-to-AGI tradeoff seems worth it. They also argue that the transformative potential of AI is becoming well-known, and now is the time to act to benefit from our foresight on it. They’re looking for a full-stack developer as a cofounder.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on SoGive Grants: a promising pilot. Our reflections and payout report. · 2022-12-09T09:20:16.030Z · EA · GW

Post summary (feel free to suggest edits!):
SoGive is an EA-aligned research organization and think tank. In 2022, they ran a pilot grants program, granting £223k to 6 projects (out of 26 initial applicants):

  • Founders Pledge - £93,000 - to hire an additional climate researcher.
  • Effective Institutions Project - £62,000 - for a regranting program.
  • Doebem - £35,000 - a Brazilian effective giving platform, to continue scaling.
  • Jack Davies - £30,000 - for research improving methods to scan for neglected X-risks.
  • Paul Ingram - £21,000 - to poll how nuclear winter information affects support for nuclear armament.
  • Social Change Lab - £18,400 - 2xFTE for 2 months, researching social movements.

The funds were sourced from private donors, mainly people earning to give. If you’d like to donate, contact isobel@sogive.org.

They advise future grant applicants to lay out their theory of change (even if their project is one small part of it), reflect on how they came to their topic and whether they’re the right fit, and consider downside risk.

They give a detailed review of their evaluation process, which was heavy touch and included a standardized bar to meet, the ITN+ framework, delivery risks (eg. if a project gets 80% of the way there, is that 80% of the good?), and the information value of the project. They tentatively plan to run the program again in 2023, with a lighter touch evaluation process (the extra time didn’t add much value).

They also give reflections and advice for others starting grant programs, and are happy to discuss this with anyone.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Thoughts on AGI organizations and capabilities work · 2022-12-09T08:50:08.868Z · EA · GW

Thanks, and that makes sense, edited to reflect your suggestion

Comment by Zoe Williams (GreyArea) on Thoughts on AGI organizations and capabilities work · 2022-12-09T06:14:01.646Z · EA · GW

Post summary (feel free to suggest edits!):
Rob paraphrases Nate’s thoughts on capabilities work and the landscape of AGI organisations. Nate thinks: 

  1. Capabilities work is a bad idea, because it isn’t needed for alignment to progress and it could speed up timelines. We already have many ML systems to study, which our understanding lags behind. Publishing that work is even worse.
  2. He appreciates OpenAI’s charter, its openness to talking with EAs / rationalists, its clearer alignment effort than FAIR or Google Brain, and its transparency about its plans. He considers DeepMind on par with, and Anthropic slightly ahead of, OpenAI on taking alignment seriously.
  3. OpenAI, Anthropic, and DeepMind are unusually safety-conscious AI capabilities orgs (e.g., much better than FAIR or Google Brain). But reality doesn't grade on a curve, there's still a lot to improve, and they should still call a halt to mainstream SotA-advancing potentially-AGI-relevant ML work, since the timeline-shortening harms currently outweigh the benefits.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Learning from non-EAs who seek to do good · 2022-12-09T01:16:49.173Z · EA · GW

Post summary (feel free to suggest edits!):
The author asks whether EA aims to be a question about doing good effectively, or a community based around ideology. In their experience, it has mainly been the latter, but many EAs have expressed they’d prefer it be the former.

They argue the best concrete step toward EA as a question would be to collaborate more with people outside the EA community, without attempting to bring them into the community. This includes policymakers on local and national levels, people with years of expertise in the fields EA works in, and people who are most affected by EA-backed programs.

Specific ideas include EAG actively recruiting these people, EA groups co-hosting more joint community meetups, EA orgs measuring preferences of those impacted by their programs, applying evidence-based decision-making to all fields (not just top cause areas), engaging with people and critiques outside the EA ecosystem, funding and collaborating with non-EA orgs (eg. via grants), and EA orgs hiring non-EAs.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Why development aid is a really exciting field · 2022-12-08T20:37:13.233Z · EA · GW

Edited, thanks :)

Comment by Zoe Williams (GreyArea) on Promoting compassionate longtermism · 2022-12-08T09:17:24.157Z · EA · GW

Post summary (feel free to suggest edits!):
Some suffering is bad enough that non-existence is preferable. The lock-in of uncompassionate systems (eg. through AI or AI-assisted governments) could cause mass suffering in the future.

OPIS (Organisation for the Prevention of Intense Suffering) has until now worked on projects to help ensure that people in severe pain can get access to effective medications. In future, they plan to “address the very principles of governance, ensure that all significant causes of intense suffering receive adequate attention, and promote strategies to prevent locked-in totalitarianism”. One concrete project within this is a full length film to inspire people with this vision and lay out actionable steps. They’re looking for support in the form of donations and / or time.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Revisiting EA's media policy · 2022-12-08T01:19:11.412Z · EA · GW

Post summary (feel free to suggest edits!):
CEA follows a fidelity model of spreading ideas, which holds that, because EA ideas are nuanced and the media often isn't, media communication should only be done by qualified people who are confident the media will report the ideas exactly as stated.

The author argues against this on four points:

  1. Sometimes many people doing something ‘close to right’ is better than a few doing it ‘exactly right’ - eg. few vegans vs. many reducetarians.
  2. If you don’t actively engage the media, a large portion of coverage will be from detractors, and therefore negative.
  3. EA’s core ideas are not that nuanced. Most critics have a different emotional response or critique how it’s put into practice, rather than get anything factually wrong.
  4. The fidelity model contributes to hero worship and concentration of power in EA.

The author suggests further discussion on this policy, acknowledgement from CEA of the issues with it, experimenting with other approaches in low-risk settings, and historical / statistical research into what approaches have worked for other groups.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on The Spanish-Speaking Effective Altruism community is awesome · 2022-12-08T01:16:51.519Z · EA · GW

Post summary (feel free to suggest edits!):
Since Sandra Malagón and Laura González were funded to work on growing the Spanish-speaking EA community, it’s taken off. There have been 40 introductory fellowships, 2 new university groups, 2 camps, many dedicated community leaders, translation projects, a 7-fold increase in Slack activity vs. 2020, and a community fellowship / new hub in Mexico City. If you’re keen to join in, the Slack workspace is here, and anyone (English or Spanish speaking) can apply to EAGxLatAm.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Why development aid is a really exciting field · 2022-12-08T01:13:41.198Z · EA · GW

Post summary (feel free to suggest edits!):
Wealthy countries spend a collective $178B on development aid per year - 25% of all giving worldwide. Some aid projects have been cost-effective on a level with GiveWell’s top recommendations (eg. PEPFAR), while others have caused outright harm.

Aid is usually distributed via a several step process:

  1. Decide to spend money on aid. Many countries signed a 1970 UN resolution to spend 0.7% of GNI on official development assistance.
  2. Government decides a general strategy / principles.
  3. Government passes a budget, assigning $s to different aid subcategories.
  4. The country’s aid agency decides on projects. Sometimes this is donating to intermediaries like the UN or WHO, sometimes it’s direct.
  5. Projects are implemented.

This area is large in scale. Tractability is uncertain, but there are many pathways and some past successes (eg. a grassroots EA campaign in Switzerland increased funding, and the US aid agency ran a cash-benchmarking experiment with GiveDirectly). Few organisations focus on this area compared to its scale.

The author and their co-founder have been funded to start an organization in this area. Get in touch if you’re interested in Global Development and Policy.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Visualizing the development gap · 2022-12-08T01:11:44.149Z · EA · GW

Post summary (feel free to suggest edits!):
The US poverty threshold, below which one qualifies for government assistance, is $6625 per person for a family of four. In Malawi, one of the world’s poorest countries, the median income is a twelfth of that (adjusted for purchasing power). Without a change in growth rates, it will take Malawi almost two centuries to catch up to where the US is today.

This example illustrates the development gap: the difference in living standards between high and low income countries. Working on this is important both for the wellbeing of those alive today, and because it allows more people to participate meaningfully in humanity’s most important century and therefore help those in the future too.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Update on Harvard AI Safety Team and MIT AI Alignment · 2022-12-06T09:23:33.377Z · EA · GW

Good to know, cheers - will update in the summary posts and emails to include all authors.

Comment by Zoe Williams (GreyArea) on The Founders Pledge Climate Fund at 2 years · 2022-12-05T23:03:50.249Z · EA · GW

Post summary (feel free to suggest edits!):
The Founders Pledge Climate Fund has run for 2 years and distributed over $10M USD. 

Because the climate-space has ~$1T per year committed globally, the team believes the best use of marginal donations is to correct existing biases of overall climate philanthropy, fill blindspots and leverage existing attention on climate. The Fund can achieve this more effectively than individual donations because it can make large grants to allow grantees to start new programs, quickly respond to time-sensitive opportunities, and make catalytic grants to early-stage organizations who don’t yet have track records.

Examples include a substantial increase in the growth of grantee Clean Air Task Force, and significant investments in emerging economies that receive less from other funders.

Future work will look at where best to focus policy efforts, and the impact of the Russo-Ukrainian war on possible policy windows.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Update on Harvard AI Safety Team and MIT AI Alignment · 2022-12-05T22:48:06.324Z · EA · GW

Post summary (feel free to suggest edits!):
Reflections from an organizer of the student organisations Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA).

Top things that worked:

  • Outreach focusing on technically interesting parts of alignment and leveraging informal connections with networks and friend groups.
  • HAIST office space, which was well-located and useful for programs and coworking.
  • Leadership and facilitators having had direct experience with AI safety research.
  • High-quality, scalable weekly reading groups.
  • Significant time expenditure, including mostly full-time attention from several organizers.

Top things that didn’t work:

  • Starting MAIA programming too late in the semester (leading to poor retention).
  • Too much focus on intro programming.

In future, they plan to set up an office space for MAIA, share infrastructure and resources with other university alignment groups, and improve programming for already engaged students (including opportunities over winter and summer break). 

They’re looking for mentors for junior researchers / students, researchers to visit during retreats or host Q&As, feedback, and applicants to their January ML bootcamp or to roles in the Cambridge Boston Alignment Initiative.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Why Giving What We Can recommends using expert-led charitable funds · 2022-12-05T21:50:02.439Z · EA · GW

Post summary (feel free to suggest edits!):
Funds allow donors to give as a community, with expert grantmakers and evaluators directing funds as cost-effectively as possible. Advantages include that the fund can learn how much funding an organization needs, provide it when they need it, monitor how it’s used, and incentivize them to be even more impactful. It also provides a reliable source of funding and support for those organisations.

GWWC recommends most donors give to funds, with the exception of those who have unique donation opportunities that funds can’t access, or who believe they can identify more cost-effective opportunities themselves (eg. due to substantial expertise, or differing values to existing funds). You can find their recommended funds here.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on The deathprint of replacing beef by chicken and insect meat · 2022-12-05T21:42:52.629Z · EA · GW

Post summary (feel free to suggest edits!):
A recent study (Bressler, 2021) estimated that for every 4000 tons of CO2 emitted today, there will be one extra premature human death before 2100. The post author converts this into human deaths per kilogram of meat produced (based on CO2 emissions for each species), and pairs this with the number of animals of each species that must be slaughtered to produce 1kg of meat.

After weighting by neurons per animal, their key findings (presented in a table in the original post) suggest that switching from beef to chicken or insect meat reduces climate impact but significantly increases animal suffering, so it might be bad overall. They suggest prioritizing a reduction in chicken meat consumption, and that policymakers stop subsidizing research on insect meat, tax meat based on its climate and suffering externalities, and start subsidizing plant- and cell-based meat.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost) · 2022-12-05T20:37:31.930Z · EA · GW

Post summary (feel free to suggest edits!):
Linkpost to an article by Rohit Krishnan, a former hedge fund manager. Haydn highlights key excerpts, including one claiming that “This isn’t Enron, where you had extremely smart folk hide beautifully constructed fictions in their publicly released financial statements. This is Dumb Enron, where someone “trust me bro”-ed their way to a $32 Billion valuation.”

They note that “the list of investors in FTX [was] a who’s who of the investing world”, and that while “VCs don’t really do forensic accounting”, there were still plenty of red flags they should have checked: basics like whether FTX had an accountant, a management team, a back office, or a board; whether it lent money to the CEO; and how intertwined FTX and Alameda were. The author has made investments 1/10th the size of some major investors’ stakes in FTX and still required a company audit, with most of these questions taking “half an hour max”.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on "Insider EA content" in Gideon Lewis-Kraus's recent New Yorker article · 2022-12-05T20:36:33.017Z · EA · GW

Post summary (feel free to suggest edits!):
Linkpost and key excerpts from a New Yorker article overviewing how EA has reacted to SBF and the FTX collapse. The article claims that, before the collapse, a warning that SBF “has a reputation [in some circles] as someone who regularly breaks laws to make money” was shared in an internal Slack channel of EA leaders.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Some notes on common challenges building EA orgs · 2022-12-05T20:34:15.024Z · EA · GW

Post summary (feel free to suggest edits!):
The author’s observations from talking to / offering advice to several EA orgs:

  • Many orgs skew heavily junior, and most managers and managers-of-managers are in that role for the first time.
  • Many leaders are isolated (no peers to check in with) and / or reluctant (would prefer not to do people management).

They suggest several solutions:

  • Creating an EA managers’ Slack (let them know if you’re interested!)
  • Using non-EA management / leadership coaches - in the author’s coaching experience, most questions aren’t EA-specific.
  • Having more orgs hire a COO to take over people management from whoever does the vision / strategy / fundraising.
  • Having more orgs split management roles into separate people-management and technical-leadership roles.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Come get malaria with me? · 2022-12-05T20:33:04.331Z · EA · GW

Post summary (feel free to suggest edits!):
There is a paid opportunity to take part in a malaria vaccine trial in Baltimore from January to early March. The vaccine has a solid chance of being deployed for pregnant women if it passes this challenge trial. The time commitment is ~55 hours if you’re in Baltimore, more if you need to travel, and the risk of serious complications is very low. The author has signed up, and knows 6 others who have expressed serious interest. Get in touch with questions or to join an Airbnb the author is setting up for it.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on 2022 ALLFED highlights · 2022-12-05T20:31:51.399Z · EA · GW

Post summary (feel free to suggest edits!):
Highlights for ALLFED in 2022 include:

  • Submitted 4 papers to peer review (some now published).
  • Started to develop country-level preparedness and response plans for Abrupt Sunlight Reduction Scenarios (US plan completed).
  • Worked on financial mechanisms for food system interventions, including superpests, the climate-food-finance nexus, and pandemic preparedness.
  • Delivered briefings to several NATO governments and UN agencies on global food security, nuclear winter impacts, policy considerations and resilience options.
  • Appeared in major media outlets such as BBC Future and The Times.
  • Improved internal operations, including registering as a 501(c)(3) non-profit.
  • Delivered 20+ presentations and attended 30+ workshops / events / conferences.
  • Hired 6 research associates, 4 operations roles, 5 interns, and 42 volunteers.

ALLFED is funding-constrained and appreciates any donations. The heightened geopolitical tensions from the Russo-Ukrainian conflict create a time-limited policy window for bringing their research on food system preparedness to the forefront of decision-makers’ minds.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Altruistic kidney donation in the UK: my experience · 2022-12-05T20:28:00.369Z · EA · GW

Awesome, thanks for doing this!

Post summary (feel free to suggest edits):
Around 250 people on the UK kidney waiting list die each year. Donating your kidney via the UK Living Kidney Sharing Scheme can potentially kick off altruistic chains of donor-recipient pairs, ie. multiple donations. Donor and recipient details are kept confidential.

The process takes ~12-18 months and involves consultations, tests, surgery, and (for the author) 3 days of hospital recovery. In the week since discharge, most problems have cleared up, they can slowly walk several miles, and they’ve encountered no serious complications.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Beware frictions from altruistic value differences · 2022-12-05T20:25:47.209Z · EA · GW

Post summary (feel free to suggest edits!):
Differing values create risks of uncooperative behavior within the EA community, such as failing to update on good arguments because they come from the “other side”, failing to achieve common moral aims (eg. avoiding worst-case outcomes), failing to compromise, or committing harmful acts out of spite or tribalism.

The author suggests mitigating these risks by assuming good intent, looking for positive-sum compromises, actively noticing and reducing our tendency to promote / like our ingroup more, and validating that the situation is challenging and it’s normal to feel some tension.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Comment by Zoe Williams (GreyArea) on Introducing the Animal Advocacy Bi-Weekly Digest (Nov 4 - Nov 18) · 2022-11-24T20:39:06.444Z · EA · GW

Ah cheers, that makes sense - I'll update in the forum summary post too.

Comment by Zoe Williams (GreyArea) on EA & LW Forums Weekly Summary (10 - 16 Oct 22') · 2022-11-03T08:33:49.031Z · EA · GW

That's great to hear, thank you :)

Comment by Zoe Williams (GreyArea) on EA & LW Forums Weekly Summary (17 - 23 Oct 22') · 2022-10-31T02:43:17.125Z · EA · GW

Great to hear, and thanks for the clarification.

Comment by Zoe Williams (GreyArea) on EA & LW Forums Weekly Summary (19 - 25 Sep 22') · 2022-09-29T18:42:23.250Z · EA · GW

Thanks! Fixed

Comment by Zoe Williams (GreyArea) on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-09-04T10:02:45.491Z · EA · GW

Ah yep, good idea - I'll do that for next week's :)

Comment by Zoe Williams (GreyArea) on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-08-30T11:44:03.371Z · EA · GW

Awesome, glad to hear :-)

Ah good point - that's one I decided partway not to summarize but forgot to move it out of the section.

Comment by Zoe Williams (GreyArea) on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-08-30T11:19:40.333Z · EA · GW

Thanks for the feedback!

having this available as a podcast (read by a human) would be cool

That would be awesome - I don't have time to make one myself, but if anyone else wants to take the post and make it into something like that feel free.

this sentence is confusing to me: "Due to this, he concludes the cause area is one of the most important LT problems and primarily advises focusing on other risks due to neglectedness." - is it missing a "not"?

Good point - I've re-read the conclusion and changed that line to be a bit clearer. It now reads:  "Due to this, he concludes that climate change is still an important LT area - though not as important as some other global catastrophic risks (eg. biorisk), which outsize on both neglectedness and scale."
 

Comment by GreyArea on [deleted post] 2022-02-03T09:51:35.796Z

It’s a good point - there are often cases for discounting in decisions where we’re weighing up value. It’s usually done for two reasons. One is uncertainty: we’re less certain of things in the future, so our actions might not do what we expect, or the reward we’re hoping for might not actually happen. The second applies only to financial value: given inflation – and that you’re likely to have more income the older you are – money is worth more now than later.

The second reason doesn’t really apply here, because happiness doesn’t decrease in value across generations – your happiness doesn’t matter less than your parents’ or grandparents’ did, even though $5 now means less than $5 then. The first reason is interesting because there is a lot of uncertainty about the future. For some of our actions this means we should discount their expected effects – they might not do what we expect – but that doesn’t mean the people themselves are of less value, just that we’re not as sure how to help them. I think the actions we can be most sure will help them are things that reduce risks in the short-term future, because if everything goes to crap or we all die, that’s pretty sure to be negative for them. Uncertainty about the people themselves would look more like: ‘I know how to help these guys, but I’m not sure I want to – I’m not sure they’ll be people worth helping.’ Personally I think I might care about them more: every generation so far has advanced in how it treats others, so I like you already, but I reckon I might like us even better if we’d grown up 5,000 years from now!
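The distinction above – discounting money (or an action’s expected effect) versus discounting the people themselves – can be sketched numerically. All rates and probabilities below are made-up illustrations, not claims from the comment.

```python
def discounted_value(value, annual_rate, years):
    """Standard exponential discounting: value / (1 + r)^t."""
    return value / (1 + annual_rate) ** years

# Financial reason: $5 in 50 years is worth less than $5 today
# under an assumed 3% annual inflation rate.
future_five = discounted_value(5, 0.03, 50)
print(f"$5 in 50 years is worth about ${future_five:.2f} today")

# Uncertainty reason: discount the *expected effect of an action*
# (it might not work), while leaving the moral weight of the future
# people themselves undiscounted, per the comment's argument.
p_action_succeeds = 0.5          # assumed chance the intervention helps
weight_per_future_person = 1.0   # undiscounted moral weight
expected_benefit = p_action_succeeds * weight_per_future_person
print(f"Expected benefit of the action: {expected_benefit}")
```

The point the sketch makes is structural: the discount applies to the probability the action works, not to `weight_per_future_person`, which stays at 1.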

Comment by Zoe Williams (GreyArea) on The career questions thread · 2015-06-29T21:00:01.761Z · EA · GW

What type of arts do you enjoy? For instance, I always really enjoyed English and drama, and am now in a data science job where I am going to be writing up publications and doing talks in addition to my coding/stats work. If you go for a small or start-up company, you can often have a broader job like this where you can take on tasks that interest you - my perception is that larger companies tend to have more regimented roles.

If you're more into visual arts, web design, marketing or some sort of community-building/social logistics could be good options. They'd also provide good skills in short supply to volunteer to the EA community.

Comment by Zoe Williams (GreyArea) on I am Samwise [link] · 2015-01-09T05:56:04.521Z · EA · GW

I see the hero as the one pushing innovative new strategies for world-changing (eg. starting a business in that area, like Givewell - specifics subject to what changes the hero wants to make), while the sidekicks are the ones that help out by being employed in that business (in a non-directing role) or donating to it or providing moral support etc. - they help what's already been created do better, and thus have to choose from people/causes that already exist rather than creating their own.

Comment by Zoe Williams (GreyArea) on Please vote for our video on Deworm the World in this online poll. · 2014-12-14T04:56:25.882Z · EA · GW

Great job getting something up for this, guys. To improve it for next year, I’d suggest interspersing some footage of the work being done, and putting key points (the cost to deworm a kid, the number who’ve been helped, and the benefits of deworming) right at the start of the video – literally in the first 20 seconds if possible – and in a text overlay so everyone sees them (a lot of people I’ve talked to who have watched P4A vids only watch the very start of them). Since it’s a yearly thing, the time investment now could pay off; starting with those big facts to grab attention, then winding into an introduction of where the research comes from and more description, could work well.

Also, does anyone know how many of the top voted charities the money gets split between? Is it the top three, or top ten, or...? And how it's split (proportional to votes or equally)?

Comment by Zoe Williams (GreyArea) on Ideas for new experimental EA projects you could fund! · 2014-12-02T23:14:05.776Z · EA · GW

Are there currently any posters/brochures for EA, Givewell, GWWC etc.?

Edit: thanks guys, glad to know these exist. Will probably print a few to dot around my university.

Comment by Zoe Williams (GreyArea) on Your Good Deeds 2014 Thread · 2014-10-01T21:51:49.977Z · EA · GW

I donated a couple hundred dollars to GiveDirectly myself, my brother donated another $75 for my birthday, and a month or so ago I took up the idea of a charity jar from LessWrong (putting money in it every time I need an emotional boost or turn down donating to collectors on the street or to projects like HabitRPG) - at the end of the year it’s all going to an as-yet-undecided effective charity.

I also wrote my final psyc assignment (where we can research anything related to judgement and decision-making and do a write-up or pilot study) on the charitable giving process, how people choose charities, and how we can make it more effective. Probably not new ground covered for EAs but I might post it. I discussed this a lot with friends and tutors, so hopefully that'll make some of them interested in EA too - I've had at least a few ask more about Givewell and look it up.