Good to see more ideas on new charities.
Could you provide more details on this example idea:
Banning harmful practices (like genetic modification)
Charity Entrepreneurship produced a report on welfare-focused gene modification back in 2019. Has there been a change of mind since then?
The donor is anonymous.
From the Wired article: "The temporary exhibit is funded until May by an anonymous donor..."
Thanks for all the comments.
Updated the post with a recent tweet from Sam Altman, CEO of OpenAI:
"recalibrate" means "increase" obviously.
disappointing to see this six-week development. openai will continually decrease the level of risk we are comfortable taking with new models as they get more powerful, not the other way around.
John Culver's How We Would Know When China Is Preparing to Invade Taiwan is also worth reading.
China’s political strategy for unification has always had a military component, as well as economic, informational, legal, and diplomatic components. Most U.S. analysis frames China’s options as a binary of peace or war and ignores these other elements. At the same time, many in Washington believe that if Beijing resorts to the use of force, the only military option it would consider is invasion. This is a dangerous oversimplification. China has many options to increase pressure on Taiwan, including military options short of invasion—limited campaigns to seize Taiwan-held islands just off China’s coast, blockades of Taiwan’s ports, and economic quarantines to choke off the island’s trade. Lesser options probably could not compel Taiwan’s capitulation but could further isolate it economically and politically in an effort to raise pressure on the government in Taipei and induce it to enter into political negotiations on terms amenable to Beijing.
An all-out invasion would be detected months in advance:
Any invasion of Taiwan will not be secret for months prior to Beijing’s initiation of hostilities. It would be a national, all-of-regime undertaking for a war potentially lasting years.
You're welcome! Thanks for donating!
Ray Dalio also has a TisBest $50 charity gift card page: https://www.tisbest.org/rg/ray/
This describes three utopias. It makes sense to have several since everyone has differing definitions of utopia.
The 'Psychonauts' sound like the Hedonistic Imperative version of utopia:
The Psychonauts had formed the second most popular cluster. They endorsed hedonism as a theory of value, believing that the purpose of life is the elimination of suffering and the enjoyment of bliss.
Hedonistic Imperative - David Pearce. Eradicating suffering through biotechnology and paradise engineering.
Toby Ord has written about the affectable universe, the portion of the universe that “humanity might be able to travel to or affect in any other way.”
I’m curious whether anyone has written about the affectable universe in terms of time.
- We can only affect events in the present and the future
- Events are always moving from the present (affectable) to the past (unaffectable)
- We should intervene in present events (e.g. reduce suffering) before these events move to the unaffectable universe
Thanks for your post, great advice.
Please ensure you include the book's title, author, and year/edition, as well as any other information requested by the library. If you're a university group organiser, it's likely helpful to note that you're with a university student group.
Maybe include the ISBN as well. For academic libraries, it's also helpful to say which students the book is relevant for. Peter Singer's books would be relevant for the Arts students studying philosophy, for example. Academic libraries can buy some extracurricular resources, but most of the budget is for course-relevant resources.
It's important to actually use the books after they arrive! Libraries look at metrics like the number of times a book is borrowed, the number of unique borrowers, the date it was last borrowed, etc. Books that don't get used will eventually be weeded out of the collection. Books that are borrowed a lot may justify multiple copies.
For community building, there's the International Suffering Abolitionists group. It hosts meetups and maintains a Discord server and a section of EA Gather Town.
"Invincible Wellbeing is a research organisation whose mission is to promote research targeting the biological substrates of suffering."
Appears on the 80,000 Hours Job Board
(Edit: Accidentally posted a duplicate link.)
Aligned with whom? by Anton Korinek and Avital Balwit (2022) has a possible answer. They write that an aligned AI system should have
- direct alignment with its operator, and
- social alignment with society at large.
Some examples of failures in direct and social alignment are provided in Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek, 2021).
We could expand the moral circle further by aligning AI with the interests of both human and non-human animals. Direct, social and sentient alignment?
As you mentioned, these alignments present conflicting interests that need mediation and resolution.
The Tech Worker Handbook website has more information about Non-Disclosure Agreements (NDAs). It also cautions people against reading the website on a company device:
I do NOT advise accessing this from a company device. Your employer can, and will likely, track visits to a resource like this Handbook.
Business Insider's review of 36 NDAs in the tech industry:
Some NDAs say explicitly that the confidentiality provisions never sunset, effectively making them lifelong agreements...
More than two-thirds of workers who shared their agreements with Insider said they weren’t exactly sure what the documents prevented them from saying—or whether even sharing them was a violation of the agreement itself.
Thank you for posting this. Appreciate reading about his life and legacy.
Are there any links to the translations of David Pearce's works?
Thanks for the comment.
It would focus on species that have the capacity for suffering and enjoyment, so not all species.
I agree it is a hugely ambitious project. Megaprojects are within the scope of EA and its funders.
If most wild animal lives have negative wellbeing, I think this kind of intervention would be preferable to the status quo or extinction.
Thanks, I completely agree. David Pearce is the founder of this line of thought: editing and rewriting nature to reduce and eliminate involuntary suffering.
I have added a quotation to the post:
Like saving the drowning child in Singer’s thought experiment, now that gene drive technology is available, there is a choice between doing nothing and intervening to do good.
"In the post-CRISPR era, whether intelligent agents decide to preserve, reform, or phase out the biology of involuntary suffering will be an ethical choice."
David Pearce, Compassionate Biology
Many thanks for writing this essay. The history of technological restraint is fascinating. I never knew that Edward Teller wanted to design a 10-gigaton bomb.
Something I have noticed in history is that advocates of technological restraint are often labelled luddites or luddite supporters. Here's an example from 2016:
Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award
After a month-long public vote, the Information Technology and Innovation Foundation (ITIF) today announced that it has given its annual Luddite Award to a loose coalition of scientists and luminaries who stirred fear and hysteria in 2015 by raising alarms that artificial intelligence (AI) could spell doom for humanity. ITIF argued that such alarmism distracts attention from the enormous benefits that AI can offer society—and, worse, that unnecessary panic could forestall progress on AI by discouraging more robust public and private investment.
“It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse,” said ITIF President Robert D. Atkinson. “Do we think either of them personally are Luddites? No, of course not. They are pioneers of science and technology. But they and others have done a disservice to the public—and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today—by demonizing AI in the popular imagination.”
“If we want to continue increasing productivity, creating jobs, and increasing wages, then we should be accelerating AI development, not raising fears about its destructive potential,” Atkinson said. “Raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption. The obvious irony here is that it is hard to think of anyone who invests as much as Elon Musk himself does to advance AI research, including research to ensure that AI is safe. But when he makes inflammatory comments about ‘summoning the demon,’ it takes us two steps back.”
The list of people the ITIF wanted to call luddites included "Advocates seeking a ban on 'killer robots'", probably the Campaign to Stop Killer Robots.
I wonder what the ITIF's position is on Teller's 10-gigaton bomb.
Lack of access to the incorporated standards, since the standards often cost hundreds of dollars each to access.
Not only are many standards expensive, but they often include digital rights management that makes them cumbersome to access and open.
In Australia, access to standards is controlled by private companies that can charge whatever they like. There's currently a petition to the Australian parliament with 22,526 signatures requesting free or affordable access to Australian Standards, including standards mandated by legislation. Across the ditch, the New Zealand government has set a great example by funding free access to building standards.
It's important for AI safety standards to be open access from the start.
- Support national laws and agencies regulating advanced AI. For the US, see this bipartisan bill and this proposal to establish a federal agency.
Translation is a great idea.
It was one of the winners of the Future Fund’s Project Ideas Competition, and it's now listed on the project ideas page.
A problem unique to Chinese content is ensuring that it doesn't get blocked by China's internet censorship.
Thanks for bringing up the idea of case studies.
It would also be useful to study verification, compliance and enforcement of these regulations: "Trust, but verify."
A few suggestions for next steps:
- Support investigative journalism into AI progress and safety. Something similar to this, but for AI: Bankman-Fried Family Donates $5 Million to ProPublica: Grant will support reporting on biosecurity and pandemic preparedness.
- Support non-governmental organizations that campaign for international laws and treaties regulating AI. The regulation of autonomous weapons might be a good starting point. See the Campaign to Stop Killer Robots and Lethal Autonomous Weapons.
Thanks for your questions. Here are some thoughts:
[signalling or alarm system] would be a functional replacement, performing the same function as pain, but replacing suffering with information.
Is this something like rationality? Some individuals can learn by rational rather than emotional understanding. How can an individual's reasoning potential be known?
I think rationality would apply to both cases. Let's say you feel pain in your arm, you or your doctor would use rational methods to figure out what's wrong. The same thing would happen if a diagnostic tool or a gene-edited system notified you, without the pain signal, that there's something wrong with your arm. You would still use rationality to diagnose and fix the problem.
This would mean that suffering-reducing measures should be universal or could cause unintended suffering to non-participants.
I agree with you that suffering reduction should be universal. Effective altruism has really pushed the idea of overcoming bias in location, time and species.
It is implied that developing competence and survival is enjoyable, and more enjoyable than (painfully) dying very young. Is there any evidence for either of those claims?
The second chapter of the book focuses on r-strategists, but also states that "r-strategist infants aren't the only wild animals who experience a low level of welfare. Most (sentient) K-strategist animals and r-strategist adults endure a considerable amount of suffering from a variety of sources..."
completely eliminating suffering would decrease an animal’s capacity for positive experiences
What suggests that this is the case? A counter-example is that taking an analgesic does not eliminate one's ability to feel pleasure.
I agree. Joanne Cameron is also a good example of someone who doesn't feel pain and appears to have a normal capacity for positive experiences and happiness. The effects of eliminating pain or suffering on happiness are worth further study.
No worries. I think we have different definitions of the status quo, and that is affecting our interpretation of the survey results.
Your definition of the status quo is a form of independence: functional independence (or perhaps de facto independence). In that case, since all the survey results show that "Maintain status quo" is popular, independence would be the most popular choice.
My definition of the status quo is something in between unification and independence, a third way. It's the "none of the above" choice, rejecting both unification and independence. By this definition, all the survey results show that this position is the most popular choice.
It's a shame that the survey question doesn't actually define what the status quo is. The status quo changes over time too, so it's hard to pin down.
But perhaps that is what makes the status quo option so popular. It's a vague, undefined entity that can be interpreted however you like.
Anyway, for completeness, here's the full survey question from the data collection methodology:
The independence-unification (TI-UM) position is constructed from the following survey item:
“Thinking about Taiwan-mainland relations, there are several differing opinions:
- unification as soon as possible;
- independence as soon as possible;
- maintain the status quo and move toward unification in the future;
- maintain the status quo and move toward independence in the future;
- maintain the status quo and decide in the future between independence or unification;
- maintain the status quo indefinitely.
Which do you prefer?”
In addition to these six attitudes, the trend chart also includes non-responses for a total of seven categories.
Thanks for your post! Good to see this issue in the EA Forum.
Regarding the statement that:
At this point, most people in Taiwan don’t consider themselves Chinese anymore and simply want to be their own nation instead, indefinitely.
Survey data supports your first point. The vast majority of people in Taiwan call themselves "Taiwanese" or "Both Taiwanese and Chinese":
Survey data doesn't support your second point though: "[most people in Taiwan] simply want to be their own nation instead, indefinitely". Most people in Taiwan support the status quo in various forms:
The most popular options are:
- Maintain status quo, decide at later date (28.4%)
- Maintain status quo indefinitely (27.3%)
- Maintain status quo, move toward independence (25.1%)
The survey question doesn't define what the status quo is, but it's definitely not independence, and it's definitely not unification. It's the grey area, the middle choice, between independence and unification.
The US uses strategic ambiguity to keep Taiwan at the status quo. It will support Taiwan as long as Taiwan doesn't declare formal independence and start a war.
Why is the status quo so popular? It means peace and prosperity, and it has been surprisingly stable over the last 70 years.
WHO published a report on malaria eradication (2020) that covers megatrends like climate change.
It is similar to other reports in recommending over $6 billion per year to meet targets.
The Lancet Commission on Malaria Eradication (2019): "Malaria eradication is likely to cost over $6 billion per year. The world is already spending around $4.3 billion."
If eradication is achieved by 2040, that would be about $120 billion in total.
None mentioned in the report. It refers to the Methods section of an online appendix, but the appendix doesn't appear to be on the website.
$90 to $120 billion:
“Any costing of a 25-year eradication effort is speculative and involves uncertainties that increase over time. Nonetheless, initial modeling suggests that the costs of eradicating malaria could be $90–$120 billion between 2015 and 2040.”
From Aspiration to Action (2015)
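As a rough back-of-envelope check of the figures above (assuming a constant annual spend, which simplifies the real year-to-year profile):

```python
# Rough sanity check of the malaria eradication cost figures,
# assuming a constant annual spend (a simplifying assumption).
annual_cost = 6.0      # $ billions/year (Lancet Commission: "over $6 billion")
current_spend = 4.3    # $ billions/year currently spent worldwide
years = 2040 - 2020    # two decades from the 2020 WHO report to the 2040 target

total = annual_cost * years          # total cost over the period
gap = annual_cost - current_spend    # annual funding gap

print(f"Total cost: ${total:.0f} billion")   # Total cost: $120 billion
print(f"Annual gap: ${gap:.1f} billion")     # Annual gap: $1.7 billion
```

Two decades at $6 billion per year gives the "about $120 billion in total" figure; the longer 2015–2040 window in From Aspiration to Action, with spending ramping up over time, gives its $90–120 billion range.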
(Sorry, I didn't see your comment until now.)
Animal Ethics has some bibliographical lists: https://www.animal-ethics.org/bibliographical-lists/
Kyle Johannsen's book Wild Animal Ethics has extensive reference lists: https://philpapers.org/rec/JOHWAE-2
Thanks very much!
Great feature! Just wondering whether Our World in Data charts can be embedded into Substack and Ghost in a similar way.
Here's a related question that may help: "What are the EA movement's most notable accomplishments?"
Charity Entrepreneurship has a report called "Welfare Focused Gene Modification" from March 2019 that mentions golden rice and other GMOs, mostly farm animal interventions. The report might be superseded though because it no longer appears on the website.
This is an interesting idea from the report: "A 'Good Gene Institute', similar to the Good Food Institute, that is focused on carefully and thoughtfully building public awareness and interest in individuals getting into the science of genetics-based animal issues."
Thanks for your post. There's a reasonable case for GMOs and malaria to be a cause area. Target Malaria is using genetic modification to reduce the population of malaria-transmitting mosquitoes.
Open Philanthropy writes, "It seems likely to us that the cost-effectiveness of this grant will be competitive with donations to the Against Malaria Foundation (though unlikely that it will be more than 10 times as cost-effective)" (Open Philanthropy, 2017).
An introductory reading list on wild animal welfare that covers all the important debates:
- Should we intervene in wild animal welfare?
- Will interventions work? Are they tractable?
- What impact will wild animal welfare have on the long-term future?
The post was part of a 2018 series, the Wild Animal Welfare Literature Library Project.
Wild animal welfare has increased in prominence since then, e.g. Animal Charity Evaluators has regularly identified wild animal welfare as a key cause area.
It was Isaac Asimov's favorite story of the hundreds of stories he wrote.
I found the ending impossible to forget.
Very utopian. This is what could happen if everything goes right with AGI. The story doesn't cover all the things that could go wrong.
Thanks for your post!
Would an open access repository plus an open peer review system like PREreview or the Open Peer Review Module meet your needs?
Also, is there a need to create an open access multidisciplinary repository (green open access) for effective altruism researchers? Or is the existing network of repositories enough?
Many thanks to you and everyone for organising and funding this contest.
If anyone is interested in a sequel, November is National Novel Writing Month (NaNoWriMo). The goal is to write 50,000 words in 30 days.
A good opportunity to expand that short story into a novel!
Thanks for creating this comprehensive list!
For the wild animal suffering section, there’s a book by Kyle Johannsen that covers the ethics of intervention:
The timelines do a great job of visualising how colonisation would be completed quickly on a cosmic timescale.
There was also a memorable visualisation in Scientific American depicting how space colonies grow exponentially to fill the galaxy: Crawford, Ian (2000) Where are they? Maybe we are alone in the galaxy after all, Scientific American, July.
The time it takes to colonise the galaxy depends on the speed of the colony ships and the time it takes for new colonies to create colony ships of their own.
The remarkable thing is that the home planet only needs to send out two successful colony expeditions to start the colonisation wave. That's it: just two ships to colonise the galaxy. It's one of the highest-impact projects imaginable.
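The doubling logic behind the "two ships" point can be sketched with a toy model (the star count and generation time below are illustrative assumptions, not figures from the post):

```python
# Toy model: every colony eventually founds two new colonies, so after
# n generations there are 2**n colonies. Even a ~100-billion-star galaxy
# is saturated in a few dozen doublings.
import math

stars_in_galaxy = 100e9  # order-of-magnitude star count (assumption)
doublings_needed = math.ceil(math.log2(stars_in_galaxy))
print(doublings_needed)  # 37

# If each doubling takes, say, 1,000 years (travel plus colony build-up,
# an illustrative assumption), the whole wave takes ~37,000 years --
# an instant on a cosmic timescale.
print(doublings_needed * 1_000)  # 37000
```

The exponential term dominates: even making each generation ten times slower only multiplies the total time by ten, which is still negligible against the galaxy's ~10-billion-year age.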
That would work. Or an information symbol ⓘ (the letter 'i' in a circle).
Or a green sprout. Some games have that to indicate new players.
No worries, thanks for renaming it. I have added a short lead section.
Hello! The EA Hub has some scripts and slides in English: https://resources.eahub.org/events/intro/
Try contacting a staff member from the Groups Team, e.g. Catherine Low, for tips and pointers: https://www.centreforeffectivealtruism.org/team/
Humanitarian Assistance for Wild Animals
New article about wild animal suffering, interventions, genome editing and gene drives:
Johannsen, Kyle (2021). Humanitarian Assistance for Wild Animals. The Philosophers' Magazine 93:33-37. Available on PhilArchive: https://philarchive.org/archive/JOHHAF-5
Good idea, but one issue with donating books to a library is that the librarian still has to decide whether to accept or reject the donation. Most librarians are very selective about what gets included and what gets weeded out of their collection.
Another option is to use the library website and find the "Suggest items for the library" web form. (Search the library catalogue first to see whether the library already holds the item.) If the librarian decides to purchase the book, it is completely funded by the library budget.
You can suggest the format too: print, ebook or both. I would say both because both print and ebook formats have their respective strengths and limitations.
For university libraries, if you mention the course or unit (e.g. ethics, philosophy) that would benefit from the book, it helps the librarian to justify the purchase.
To add to arguments for inclusion, here’s an excerpt from an EA Forum post about key figures in the animal suffering focus area.
“Major inspirations for those in this focus area include Peter Singer, David Pearce, and Brian Tomasik.”
Four focus areas of effective altruism by Luke_Muehlhauser, 8th Jul 2013
David Pearce’s work on suffering and biotechnology would be more relevant now than in 2013 due to developments in genome editing and gene drives.
"Genome editing and the replacement, reduction and relief of pain as a cause area"
- A few individuals lead near-normal lives with the complete absence of pain due to natural genetic variations.
- Genome editing has the potential to replicate these genetic variations in all animals and people.
- The problem with eliminating pain is its important role in the detection and avoidance of injury.
- The challenge is to remove pain while retaining this function. Options include these 3Rs (inspired by the 3Rs of animal testing):
- Replace pain with a painless sensory system. Complete absence of pain while retaining the detection and avoidance of injury.
- Reduce the maximum level of pain from 10 to 1 or 2 on the pain scale. Keep pain but reduce its severity.
- Relieve pain for those who, out of choice or necessity, have not replaced or reduced pain.