Protonmail and Signal are end-to-end encrypted messaging services.
But depending on how paranoid the users need to be, these systems might not provide enough guarantees: you would need to trust the servers not to mount a man-in-the-middle (MITM) attack during key exchange, unless you do some sort of in-person key exchange or verification.
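To illustrate the in-person verification idea (this is roughly what Signal's "safety numbers" automate), here is a minimal sketch of comparing key fingerprints out of band; the key bytes and the truncation scheme are made up for illustration:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key.

    Both parties compute this locally and compare the result in person
    (or over some other trusted channel). A server substituting its own
    key during the exchange would produce a mismatched fingerprint.
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the first 32 hex chars into 4-char blocks for easy reading aloud.
    blocks = [digest[i:i + 4] for i in range(0, 32, 4)]
    return " ".join(blocks)

# Hypothetical key material, for illustration only.
alice_key = b"-----BEGIN PUBLIC KEY----- ...alice... -----END PUBLIC KEY-----"
print(fingerprint(alice_key))
```

If the fingerprints both parties read out match, no MITM swapped the keys in transit, without having to trust the server at all.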
But I'm definitely not an expert. In general I think there are plenty of experts that know exactly how to handle these things and they're pretty easy to contact.
Edit: I agree with acylhalide's comment: if you are facing government-level actors, this is potentially not enough.
I regret not donating more and not donating earlier. I have way too much savings, and my family is very supportive and would be happy to host me if I end up unable to pay rent.
I regret donating directly to GiveWell's top charities instead of their "all grants fund" (then called the Maximum Impact Fund), especially since many of those charities have programs of varying cost-effectiveness.
Contradicting my first point, I regret donating to various random EA charities instead of focusing my donations on the most promising fund after a lot of research. I don't think I ever was at a scale where splitting made sense.
Lastly, I regret not networking more and earlier with EAs doing exciting stuff that might need some liquidity or fallback options, in case some promised small (<5000€) grant doesn't work out or takes months. Or if they can't afford to pay for coaching/counseling.
Have you considered comparing the role at DeepMind with similar roles at e.g. Redwood and Anthropic? I think there is no consensus on which one would be most impactful, but both seem less controversial than DeepMind (I could be very wrong here, and I'm definitely not an expert; I'd just suggest considering at least three jobs before picking one).
I'm thinking of making an alternative to Guesstimate that's more scalable and more easily integrated with Google Sheets, but I'm unsure about what would be the actual value for researchers. Especially now that QURI is focusing on Squiggle.
What cause area are you most interested in? Would you spend only those 3 days, or would you be interested in using those days to familiarize yourself with a project and be open to contributing more in the future?
Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.
My 2 cents:
2 is complicated: when people have different cruxes than you, is it dishonest to talk about what should convince them, based on their cruxes?
I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.
Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question). We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.
I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux.
The way I think about this, to a first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important, are solipsistic egoists, and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others, then EA can help them achieve their values better. From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values, because from an outside view I don't think my values are more important than their values, or more likely to be "correct". I don't know if this makes any sense; really curious to hear your thoughts, as you have certainly thought about this more than I have.
You can see the full list in the linked article. In order of importance:
- Feeling unhealthy on the veg*n diet
- Low identification with veg*nism
- Believing society perceives veg*nism negatively
- Low autonomy support
- Cultural influence making it more difficult to go veg*n
- Weak habit formation around choosing veg*n food
- Difficulty finding or preparing veg*n food
- Feeling ashamed of one’s veg*n diet
- Low personal control over food
- Small veg*n network
- Feeling that veg*nism hasn’t positively impacted one’s health goals
- Frequent cravings for animal products
Specifically, people who felt unhealthy on their veg*n diet were more than three times as likely to abandon it within the first six months (30% vs. 8%). People who did not see veg*nism as part of their personal identity were about twice as likely as others to abandon it (16% vs. 8%). And people who thought society perceives veg*nism negatively were about 1.5 times as likely as others to abandon their diet (13% vs. 8%).
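For concreteness, the risk ratios implied by those percentages can be checked directly (the rates are the survey figures quoted above; the baseline is people without the given risk factor):

```python
# Six-month abandonment rates from the survey.
baseline = 0.08              # people without the risk factor

felt_unhealthy = 0.30        # felt unhealthy on the diet
low_identification = 0.16    # low identification with veg*nism
negative_perception = 0.13   # believed society views veg*nism negatively

# Risk ratios relative to the baseline group.
print(felt_unhealthy / baseline)       # 3.75  -> "more than three times"
print(low_identification / baseline)   # 2.0   -> "about twice"
print(negative_perception / baseline)  # 1.625 -> "about 1.5 times"
```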
Mass Drug Administration to combat Lymphatic filariasis (MDA LF): MDA LF is much less cost-effective (unpublished CEA estimate) compared to other intervention areas we looked at. Since the program is a mass drug administration rather than a targeted approach, the prevalence has to be above a certain rate for it to be cost-effective. The microfilaria prevalence is too low for MDA LF to be cost-effective in India (9). Infection rates in other countries are also not sufficiently high. The majority of those infected never exhibit symptoms (10); of those who do, only a small percentage develop severe symptoms that cause large problems like social ostracization and depression. Furthermore, crowdedness in this intervention is fairly high. There are already eight charities active in the area, and the crowdedness in the areas of greater microfilaria prevalence is especially high. The Indian government claims to cover 85% (11) of the country with preventative medication. While there is a problem of people not taking the medication once they receive it (12), this is not a straightforward or cost-effective problem to solve.
the quality of the best products produced by EAs and the best products produced by professionals seemed to be about the same, on average, as assessed (blinded) by Guille. This was a small sample assessed by one person, so it doesn’t constitute much evidence.
On "beneficiaries' preferences", I agree with you that the vast majority of EA in practice discounts them heavily, probably much more than when the post I linked to was written.
They are definitely taken into account, though. I really like this document from a GiveWell staff member, and I think it's representative of how a large part of EA not focused on x-risk/longtermism thinks about these things. Especially now that GiveDirectly has been removed from GiveWell's recommended charities, which I think aura-wise is a big change. But lots of EAs still donate to GiveDirectly, and GiveDirectly still gives talks at EA conferences and is on EA job boards.
I personally really like the recent posts and comments advocating for more research, and I think taking beneficiaries' preferences into account is a tricky moral problem for interventions targeting humans.
Thanks for posting this, super interesting! I was nicely surprised to find this forum post in the Wikipedia page on the "Scientific Charity Movement".
I think that post highlights some important differences. Some interesting quotes:

> EAs are also much less confident that they know what people need better than they do.

> To many EAs, dividing the poor into deserving and undeserving groups just doesn't make sense

> standards of evidence are much better now than they were over a century ago

> errors of charity that EA is a response to generally include the errors of SC

> The main way I see the comparison as a warning is that EA could end up somewhere where EA continues to talk in a scientific way, confidence goes up, standards of evidence fall, and EA ends up pushing hard on things that aren't actually that important.
This seems high, where does it say so in the paper? The Tomasik article you use for wild mammals estimates 0.1 to 0.4 trillion wild birds.
I don’t think it makes sense to say that on a given day there are, say, 26 billion poultry alive though, given death rates on farms. You’d need to do more stats to get an estimate of the number of poultry birds alive right now.
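One standard way to do that correction is Little's law: the standing population equals the throughput (here, the slaughter rate) times the average time spent in the system (the average lifespan). A back-of-the-envelope sketch, where both input numbers are rough assumptions of mine and not taken from the paper:

```python
# Little's law: standing population = throughput rate * average time in system.
slaughtered_per_year = 70e9   # assumed annual poultry slaughter (rough guess)
avg_lifespan_days = 42        # assumed average broiler lifespan (rough guess)

standing_population = slaughtered_per_year * (avg_lifespan_days / 365)
print(f"{standing_population:.2e}")  # ~8e9 alive at any moment, far below 70e9
```

The point is just that a short average lifespan makes the number alive at any moment much smaller than the annual count, so annual totals can't be read as a daily headcount.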
You might find the thread "The AI messiah" and the comments there interesting.
You quote AI results from the 70s and 90s as examples of overly optimistic AI predictions.
In recent years there have been many examples of predictions being too conservative (e.g. AlphaGo beating Lee Sedol at Go in 2016, GPT-3, Minerva, Imagen...). Self-driving seems to be the only field where progress has been slower than some expected. See e.g. https://bounded-regret.ghost.io/ai-forecasting-one-year-in/: "progress on ML benchmarks happened significantly faster than forecasters expected" (even if it was sensitive to the exact timing of a single paper, I think it's a useful data point).
Would that make you increase the importance of AI risk as a priority?
> But, if your philanthropy is explicitly going against what the recipient would choose for themself, well... From my perspective (as Vanessa this time), this is not even altruism anymore. This is imposing your own preferences on other people.
Would this also apply to e.g. funding any GiveWell top charity besides GiveDirectly, or would that fall into "in practice, this is the best way to maximize the recipient's decision-utility"?
I don't think most recipients would buy vitamin supplementation or bednets themselves, given cash. I guess you could say that it's because they're not "well informed", but then how could you predict their "decision utility when well informed" besides assuming it would correlate strongly with maximizing their experience utility?
Super excited to see more interest in this space, and people starting things in general, kudos!
Have you talked with the people working on Mind Ease and/or Canopie? (As far as I understand, Canopie was originally a Charity Entrepreneurship incubated charity, then became a for-profit). Also might be interesting to talk with the people that worked on hippo.
> by helping other people as much as possible, without any expectation of your favours being returned in the near future — you end up being much more successful, in a wide variety of settings, in the long run.
This is what you mention, and I agree with it. But
> if you and I share the same values, the social situation is very different: if I help you achieve your aims, then that’s a success, in terms of achieving my aims too. Titting constitutes winning in and of itself — there’s no need for a tat in reward. For this reason, we should expect very different norms than we are used to be optimal: giving and helping others will be a good thing to do much more often than it would be if we were all self-interested.

> One of the incredible strengths of the EA community is that we all share values and share the same end-goals. This gives us a remarkable potential for much more in-depth cooperation than is normal in businesses or other settings where people are out for themselves. So next time you talk to another effective altruist, ask them how you can help them achieve their aims. It can be a great way of achieving what you value.
I really think altruism/value-alignment is a strength, and a group would lose a lot of efficiency by not valuing it.
Rather than say I'm not altruistic, I mostly mean that *I'm not impartial to my own welfare/wellbeing/flourishing*.
To me, those are very different claims!
> 10% is not that big an ask (I can sacrifice that much personal comfort)
That's very relative! It's more than what the median EA gives, it's way more than what the median non-EA gives. When I talk to non-EA friends/relatives about giving, the thought of giving any% is seen as unimaginably altruistic.
Even people donating 50% are not donating 80%, and some would say it's not that big of an ask. IMHO, claiming that only people making huge sacrifices and valuing their own wellbeing at 0 can be considered "altruists" is a very strong claim that doesn't match how the word is used in practice.
> But I would reorient my career to work on the most pressing challenges confronting humanity given my current/accessible skill set. I quit my job as a web developer, I'm going back to university for graduate study and plan to work on AI safety and digital minds.
I think this is very admirable and wish you success! If indeed you're acting exactly like someone who straightforwardly wanted to improve the world altruistically, that's what matters :)
Edit: oh I see you were also donating 10%, that's also very altruistic! (At least from an outside view, I trust you on your motivations)
> If you were, for instance, a grantmaker, these might look very different.
Strongly upvoted, I would say that for most roles these do look very different. The "altruism" part of "effective altruism" is something I really value. I would much rather collaborate with someone that wants to do the most good, than with someone that wants to get the most personal glory or status. For example, someone that cares mostly about personal status will spend much less time helping others, especially in non-legible ways.