I'm looking into (informal) ways to transfer message history from mobile to desktop, and will get back to you. (It would be great to have a Signal expert/enthusiast on board for these kinds of questions!)
The solution that I'm fairly confident would work is to periodically back up the desktop Signal files to secure cloud storage. But again, I haven't tried this.
For moving message history from desktop to desktop, there is no official way yet endorsed by Signal's developers. But there are unofficial ways, which I have yet to test myself and would love to see tested. Please let us know if you have tried this method, whether successfully or unsuccessfully.
My understanding is that after one links their multiple desktop devices to their main mobile-phone account (via this method), all Signal messages to their main account from that point on get sent to the linked desktop devices as well. This means the following seems true:
Suppose an EA believes that most other EAs will eventually switch to Signal for all internal messages (which I think is likely due to its substantial privacy benefits).
Then, this EA would maximize the proportion of their messages stored in their devices' Signal apps by switching to Signal sooner rather than later, and linking all their desktop devices sooner rather than later.
In other words, it is plausible that the EAs who switch to Signal as early as possible (and link all their desktop devices as early as possible) will comparatively benefit, at least when it comes to the proportion of message history stored in their devices' Signal apps. In contrast, EAs who do so later would miss out on this benefit.
Consequently, as someone who predicts that most EAs should and will eventually switch to Signal, my enthusiastic suggestion is that EAs should make a Signal account and link all their desktop devices to it today!
Thank you so much for this really informative analysis! I really appreciate it.
The data seem consistent with anecdotes that there are currently many more people looking for EA jobs than there are open positions.
If this pattern is true, then one way to match more effective altruists with currently scarce EA jobs could be to establish new organizations and projects in undersaturated EA hubs (underrepresented U.S. states and countries), as opposed to just in oversaturated EA hubs (e.g., San Francisco Bay Area).
Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.
I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.
Thank you so much for this extremely important and helpful guide on EA messaging, Julia! I really appreciate it, and hope all EAs read it asap.
Social opinion dynamics seem to have the property where some action (or some inaction) can cause EA to move into a different equilibrium, with a potentially permanent increase or decrease in EA’s outreach and influence capacity. We should therefore tread carefully.
Unfortunately, social opinion dynamics are also extremely mysterious. Nobody knows precisely which actions (or inactions) carry the risk of permanently closing some doors to additional outreach and influence. Part of the system is likely inherently unpredictable, but our current ability to predict such social opinion dynamics is almost certainly far from optimal.
But perhaps EA movement-builders are already using and improving a cutting-edge model of social opinion dynamics!
Thanks so much for this extremely important and well-written post, Theo! I really appreciate it.
My main takeaway from this post (among many takeaways!) is that EA outreach and movement-building could be significantly better. I’m not sure yet on the clear next steps, but perhaps outreach could be even more individualized and epistemically humble.
One devil’s-advocate response to your point that “while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made up of such people would automatically be better than this one”: despite Goodhart’s Law, I think there is some definition of HEA such that maximizing the number of HEAs is the best practical strategy for cooperative movement-building. Having many dedicated people in a cooperative group is very important, perhaps the most important factor in determining the group's success. More complicated goals and guidelines for movement-builders are harder to use, both for individuals and for group coordination.
Thanks so much for your kind words on our post, Nick! I really appreciate it.
One of the non-governmental barriers to relocation for international folks is the general non-accessibility of relevant information. Even something as basic as finding an apartment to rent in a foreign city could present a quite high barrier (and certainly a perceived barrier) to relocation.
A thought: especially when enabled by technology, people are very capable. In theory, a person can easily offset the negative impact of their greenhouse gas emissions and still have plenty of time and resources left over to pursue positive impact. For example, by donating a fraction of their money to carbon-offsetting projects and avoiding an especially polluting lifestyle, the median American can easily have a net reducing effect on global greenhouse gas emissions over their lifetime. I also think the median person in the world can in theory achieve a net reducing effect, by devoting a fraction of their time and resources to planting trees (nature's baseline technology for carbon capture).
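As a rough back-of-the-envelope check on the offsetting claim (every number below is an illustrative assumption I chose for the sketch, not sourced data):

```python
# Back-of-the-envelope sketch; all figures are rough illustrative
# assumptions, not authoritative statistics.
annual_emissions_tonnes = 16   # assumed per-capita US CO2 emissions per year
offset_cost_per_tonne = 15     # assumed cost (USD) per tonne of credible offsets
median_income = 35_000         # assumed US median personal income (USD/year)

annual_offset_cost = annual_emissions_tonnes * offset_cost_per_tonne
share_of_income = annual_offset_cost / median_income

print(f"Offsetting costs about ${annual_offset_cost}/year, "
      f"roughly {share_of_income:.1%} of income.")
```

Under these assumed figures, full offsetting costs about $240 a year, under one percent of income, which is consistent with the "easily offset" claim above; the real numbers vary a lot with offset quality and lifestyle.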
So perhaps the right framing isn't "Should you have children despite climate change?" One alternative framing is: suppose you want to influence the next generation, who will either capably help the world or capably harm the world. Should you do it by parenting and influencing your own children, or by influencing other people's children?
I think most EAs favor the latter option, and indeed there is a compelling argument in favor of it. Humans are perhaps the only species whose primary mode of phenotypic inheritance is learning knowledge and values from group members, not only parents but many other people besides. This is why we are so adaptable and capable.
But for EAs who derive a lot of pleasure from parenting and are high-fidelity influencers as parents (i.e., have a high guarantee of influencing their children to have similar values as them), I think parenting can be an excellent use of their time and resources. I think optimal parenting is a domain which is quite neglected by EAs, and hope that this changes moving forward.
Thank you so much for your kind words, Max! I'm extremely grateful.
I completely agree that if (a big if!) we could identify and recruit AI capabilities researchers who could quickly "plug in" to the current AI safety field, and ideally could even contribute novel and promising directions for "finding structure/good questions/useful framing", that would be extremely effective. Perhaps a maximally effective use of time and resources for many people.
I also completely agree that experiential learning on how to talent-scout and recruit AI capabilities researchers is likely to be helpful for recruiting into the AI safety field more generally. The transfer will be quite high. (And of course, recruiting junior research talent, etc. will be "easy mode" compared to recruiting AI capabilities researchers.)
Thank you so much for your feedback on my post, Peter! I really appreciate it.
It seems like READI is doing some incredible and widely applicable work! I would be extremely excited to collaborate with you, READI, and people working in AI safety on movement-building. Please keep an eye out for a future forum post with some potential ideas on this front! We would love to get your feedback on them as well.
(And thank you very much for letting me know about Vael's extremely important write-up! It is brilliant, and I think everyone in AI safety should read it.)
Quoted from an EA forum post draft I'm working on:
“Humans are currently the smartest beings on the planet. This means that non-human animals are completely at our mercy. Cows, pigs, and chickens live atrocious lives in factory farms, because humans’ goal of eating meat is misaligned with these animals’ well-being. Saber-toothed tigers and mammoths were hunted to extinction, because nearby humans’ goals were misaligned with these animals’ survival.
But what if, in the future, we were not the smartest beings on the planet? AI experts predict that it’s basically a coin flip whether or not the following scenario happens by year X. The scenario is that researchers at DeepMind, Google, or Facebook accidentally create an AI system that is systematically smarter than humans. If the goal of this superintelligent, difficult-to-control AI system is accidentally misaligned with human survival, humanity will go extinct. And no AI expert has yet convinced the rest of the field that there is a way to align this superintelligent AI system’s goal in a controlled, guaranteed manner.”
If you happen to think of any suggestions, any blind spots of the post, or any constructive criticisms, I'd be extremely excited to hear them! (Either here or in private conversation, whichever you prefer.)
Thanks so much for your comment, Owen! I really appreciate it.
I was under the impression (perhaps incomplete!) that your definition of "phase 2" was "an action whose upside is in its impact," and "phase 1" was "an action whose upside is in reducing uncertainty about what is the highest impact option for future actions."
I was suggesting that we already know that recruiting people away from AI capabilities research (especially into AI safety) has a substantially high impact, and that this impact per unit of time is likely to improve with experience. So pondering without experientially trying it is worse both for optimizing its impact and for reducing uncertainty.
The best use of time and resources (in the phase 2 sense) is probably to recruit AI capabilities researchers into AI safety. Uncertainty is not impossible to deal with, and is extremely likely to shrink with experience.
I completely agree with the urgency and the evaluation of the problem.
In case begging and pleading don't work, a complementary method is to create a prestige differential between AI safety research and AI capabilities research (like the one between green-energy research and fossil-fuel research), with the goal of convincing people to move from the latter to the former. See my post for a grand strategy.
My prior is that one's degree of EA-alignment is pretty transparent. If there are any grifters, they would probably be found out pretty quickly and we can retract funding/cooperation from that point on.
Also, people who are at a crossroads of either being EA-aligned or non-EA aligned (e.g., people who want to be a productive member of a lively and prestigious community) could be organizationally "captured" and become EA-aligned, if we maintain a high-trust, collaborative group environment.
A general class of problems for effective altruists is the following:
In some domains, there are a finite number of positions through which high-impact good can be done. These positions tend to be prestigious (perhaps rationally, perhaps not), so there is strong zero-sum competition for them. The limiting factor is that effective altruists face steep competition for these positions from other well-intentioned people who are just not perfectly aligned on one or more crucial issues.
One common approach is to help effective altruists break through this competition. But this is hard. Another common approach is to try to convince non-effective-altruists who have successfully broken into these positions to become more EA-aligned. But convincing experienced people is often difficult (you can't teach an old dog new tricks, generally speaking).
A thought I can't shake is that if we could somehow reduce the competition (expand the pie, target young and high-potential people, or, more controversially, convince non-EA-aligned people to drop out of the race), this would become much more feasible.
So one alternative is to have a preprint server like arXiv (where papers can be posted) that directly serves as a journal, potentially with peer reviews that are also posted. Independent of paper availability to the public, this would also save researchers' time. (Instead of formatting papers to fit the Elsevier guidelines, they could be doing more research or training new researchers.)
I think so too! A strong anecdote can directly illustrate a cause-and-effect relationship that is consistent with a certain plausible theory of the underlying system. And correct causal understanding is essential for making externally valid predictions.
My intuition is that the priority for funding criticism of EA/longtermism is low, because there will be a lot of smart and motivated people who (in my opinion, because of previously held ideological commitments; but the true reason doesn’t matter for the purpose of my argument) will formulate and publicize criticisms of EA/longtermism, regardless of what we do.
They can be (deterministic Bayesian updating is just causal inference), but they can also not be (probabilistic Bayesian updating requires a large sample size; also, sampling bias is universally detrimental to accurate learning).
For many types of talented people, the harm to the Russian government from their emigration might be overstated (at least the short-term harm), because its economy is disproportionately based on oil and gas. Taxes from citizens' economic activity are not as important.
But the strong case for open immigration does not require this harm to be true.
It's plausible that compared to a stable authoritarian nuclear state, an unstable or couped authoritarian nuclear state could be even worse (in the worst-case scenario, and potentially even in expected value).
For a worst-case scenario, consider that if a popular uprising were on the verge of ousting Kim Jong Un, he might desperately launch nukes who-knows-where or order an artillery strike on Seoul.
Also, if you believe these high-access defectors' interviews, most North Korean soldiers genuinely believe that they can win a war against the U.S. and South Korea. This means that even if there is a palace coup rather than a popular uprising, it's plausible that an irrational general rises to power and starts an irrational nuclear war with the intent to win.
So I think it's plausible that prevention is an entirely different beast than policy regarding already existing stable, authoritarian, and armed states.
Research on how to minimize the risk of false alarm nuclear launches
Preventing false alarm nuclear launches (as Petrov did) via research on the relevant game theory, technological improvements, and organization theory, and disseminating and implementing this research, could potentially be very impactful.
Facilitate interdisciplinarity in governmental applications of social science
Values and Reflective Processes, Economic Growth
At the moment, governmental applications of social science (where, for example, economists working in the paradigm of methodological individualism are disproportionately represented) could benefit from drawing on other fields of social science that can fill potential blind spots. The theory of social norms is a particularly relevant example. Behavioral scientists and psychologists could also be very helpful in improving the judgment of high-impact decision-makers in government, and in improving predictions on policy counterfactuals by filling in previous informational blind spots. Research and efforts to increase the consideration of diverse plausible scientific paradigms in governmental applications of social science could potentially be very impactful.
Increase the number of STEM-trained people, in EA and in general
Economic Growth, Research That Can Help Us Improve
Research and efforts to increase the number of quantitatively skilled people in general, and targeted EA movement-building efforts aimed at them (e.g., for AI alignment research, biorisk research, and scientific research in general), could potentially be very impactful. Incentivizing STEM education at the school and university levels, facilitating immigration of STEM degree holders, and offering STEM-specific guidance via 80,000 Hours and other organizations could also potentially be very impactful.
Incentivize researchers to prioritize paradigm shifts rather than incremental advances
Economic Growth, Research That Can Help Us Improve
There's a plausible case that societal under-innovation is one of the largest causes (if not the largest cause) of people's suboptimal well-being. For example, scientific research could be less risk-averse and incremental, and more pro-moonshot. Interdisciplinary research on how to achieve society's full innovation potential, and movement-building targeted at universities, scientific journals, and grant agencies to incentivize scientific moonshots, could potentially be very impactful.
A fast and widely used global database of pandemic prevention data
Speed is of the essence for pandemic prevention when a pathogen emerges. A fast and widely used global database could potentially be very impactful. Ideally, events like the early discovery of potential pandemic pathogens and doctors' diagnoses of potential pandemic symptoms would be regularly and automatically uploaded to the database, so that high-frequency algorithms could use it to predict potential pandemic outbreaks faster than people can.
"find an existing YouTube studio with some folks who are interested in EA" -> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt, and they have done EA-relevant videos in the past (e.g., on meat consumption).
"But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!
Thanks so much for these suggestions! I would also really like to see these projects get implemented. There are already bootcamps for, say, pivoting into data science jobs, but having other specializations of statistics bootcamps (e.g., an accessible life-coach level bootcamp for improving individual decision-making, or a bootcamp specifically for high-impact CEOs or nonprofit heads) could be really cool as well.
Thanks for the great big-picture suggestions! Some of these are quite ambitious (in a good way!) and I think this is the level of out-of-the-box thinking needed on this issue.
This idea goes hand-in-hand with a previous post "Facilitate U.S. voters' relocation to swing states." For a project aiming to facilitate relocation to well-chosen parts of the US, it could be additionally impactful to consider geographic voting power as well, depending on the scale of the project.
I have never published a book, but some EAs have written quite famous and well-written books. In addition to what you suggested, I was thinking "80,000 pages" could organize mentoring relationships for other EAs who are interested in writing a book, writer's circles, a crowdsourced step-by-step guide, etc. Networking in general is very important for publishing and publicizing books, from what I can gather, so any help on getting one's foot in the door could be quite helpful.
Research and efforts to reduce broad meat consumption would help moral circle expansion, pandemic prevention, and climate change mitigation. Perhaps messaging from the pandemic-prevention angle (in addition to the climate change angle and the moral circle expansion angle) may help.