Thanks for the feedback, Dan. Maybe I'm using the vocabulary incorrectly - does "collective" specifically mean one person, one vote? I do specifically avoid saying "democratic" and mention market-based decision making in the first sentence.
It's not at all obvious to me that putting market-based feedback systems in place would look like the funding situation today. I think it's worth pushing back on the assumption that EA's current funding structure rewards the best performers in terms of asset allocation.
I want to push back a bit on my own intuition, which is that trying to build out collective (or market-based) decision-making for EA funding is potentially impractical and/or ineffective. Most EAs respect the power of the "wisdom of crowds", and many advocate for prediction markets. Why exactly does this affinity for markets stop at funding? It sounds like most think collective decision-making for funding is not feasible enough to consider, and that's 100% fair, but were it easy to implement, would it be ineffective?
Again, my intuition is to trust the subject matter experts, to rely on the institutions we've built for this specific task. But I invest in index funds, I believe that past performance is no guarantee of future results, and I trust that aggregate markets are typically more accurate than most experts. Have EA organizations proved that they are essentially superforecasters, that they consistently "beat the EA market" in terms of ROI? Perhaps this metaphor is doomed - these EA orgs are market-makers as well. Who better to place bets than those with insider knowledge?
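As a toy illustration of the wisdom-of-crowds intuition (my own sketch, not a claim about EA funding data): when many independent estimators make noisy guesses about a hidden quantity, the crowd's average is usually closer to the truth than the typical individual. All numbers here are made up for the simulation.

```python
import random

# Toy sketch: 500 noisy estimators guess a hidden value.
# The crowd mean tends to beat the median individual guess,
# since independent errors partially cancel.
random.seed(0)
TRUE_VALUE = 100.0
N_ESTIMATORS = 500

guesses = [random.gauss(TRUE_VALUE, 20.0) for _ in range(N_ESTIMATORS)]
crowd_estimate = sum(guesses) / len(guesses)

crowd_error = abs(crowd_estimate - TRUE_VALUE)
individual_errors = sorted(abs(g - TRUE_VALUE) for g in guesses)
median_individual_error = individual_errors[len(individual_errors) // 2]

print(f"crowd error: {crowd_error:.2f}")
print(f"median individual error: {median_individual_error:.2f}")
```

Of course, this only holds when errors are reasonably independent - which is exactly what's in question if everyone in a community shares the same funders and priors.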
At the very least, this experiment seems ripe for running if it hasn't been already. It's far beyond me to figure out how to structure it, I'll leave that to those like Nuno, who laid out a potential path. But we're making a rather large assumption that the collective is by default ineffective.
EDIT: someone pointed out that I'm conflating prediction markets w/ collective decision making. To clarify, my comment refers to market-based decision making (basically prediction markets), which I view as a subset of collective decision making. Maybe my EA vocab is off though.
I wouldn't call it predatory - in fact, every significant work test / trial I've done has been paid, which is remarkably progressive!
However, I empathize with your pain - interviewing for EA jobs is a rigorous and rather impersonal gauntlet. As far as I know, this is a feature, not a bug. It's frustrating, but I try to cut them some slack: there are many applicants, EA orgs are almost always short-staffed, and they're trying to avoid bias. Most EAs want an EA job, and these hiring processes are optimized to test that desire.
Knowing this, I don't bother applying for an EA job unless I truly think that my application can be competitive and that I actually want the job (not a bad heuristic to follow in general).
I'm hopeful for lab-grown salmon (see: Wild Type Foods), but if all else fails and the taste for salmon proves too sticky, I could imagine a counterintuitive campaign that repositions salmon as "only for holidays." Of course, I'm sure this could easily backfire. This kind of work is hard!
Could an increase in salmon preference on Christmas also lead to higher preference for salmon year-round? More people are introduced to the fish, learn how to cook it, etc. Perhaps another downstream effect to consider in your model, although difficult to quantify and hard to know if your campaign has much of an impact here.
I'm very thankful for EVF and associated orgs, and as referenced by others, it's understandable how/why the community is currently organized this way. Eventually, depending on growth and other factors, it'll probably make sense for the various subs to legally spin off, but I'm not sure if this is high priority - it depends on just how worried EAs are about governance in the wake of this past month.
I will say, conflict of interest disclosures are important, but they seem to be doing a lot of work here. As far as I can tell, leadership within these organizations also functions independently, and as EAs they're particularly aware of bias, so they've built processes to mitigate it. But being aware of bias and disclosing it doesn't necessarily stop [trustworthy] people from being biased (see: doctors prescribing drugs from companies that pay for their talks). Even if these organizations separated tomorrow, I'd half expect them to stay in relative lock-step for years to come. Even if these orgs never shared funding/leadership again, they're in the same community, they'll have funders in common, they'll want to impress the same people, and they'll make decisions with this in mind. I've seen this first-hand in every [non-EA] org I've ever been a part of, across sectors and sizes, so moving forward we'll have to build with this bug in mind and decide just how much mitigation is worth doing.
I'm aware that none of this is original or ground-breaking but perhaps worth reiterating.
This is a little facetious, but does anyone else find themselves caveating more often these days, just in case...
"My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things" - I agree that this is most likely true, but my point is that it's difficult to suss out the "real" EAs using the criteria listed. Many billionaires believe that the best course of philanthropic action is to continue accruing/investing money before giving it away.
Anyways, my point is more academic than practical - the FTX fraud seems pretty straightforward, and I appreciate your take. I wonder if this forum would be having the same sorts of convos after Thanos snaps his fingers.
I don’t [currently] view EA as particularly integral to the FTX story either. Usually, blaming ideology isn’t particularly fruitful because people can contort just about anything to suit their own agendas. It’s nearly impossible to prove causation, we can only gesture at it.
However, I’m nitpicking here but - is spending money on naming rights truly evidence that SBF wasn’t operating under a nightmare utilitarian EA playbook? It’s probably evidence that he wasn’t particularly good at EA, although one could argue it was the toll to further increase earnings to eventually give. It’s clearly an ego play but other real businesses buy naming rights too, for business(ish) reasons, and some of those aren’t frauds… right?
I nitpick because I don't find it hard to believe that an EA could also 1) be selfish, 2) convince themselves that ends justify the means and 3) combine 1&2 into an incendiary cocktail of confused egotism and lumpy, uneven righteousness that ends up hurting people. I’ve met EAs exactly like this, but fortunately they usually lack the charm, knowhow and/or resources required to make much of a dent.
In general, I’m not surprised by the community's reaction. Even in the best-case scenario, it had no idea the fraud was happening (and looks a bit naïve in hindsight), and its dirty laundry has nonetheless been exposed (it’s not so squeaky clean after all). Even if EA was only a small piece in the machinery that resulted in such a [big visible] fraud, the community strives to do *important* work and it feels bad for potentially contributing to the opposite.
Thanks for the feedback, I appreciate it! SBF has clearly been interested in EA for a long time, but taking him seriously as a thought leader is pretty new. @donychristie mentioned that he was an early poster child of earning-to-give, which I also vaguely remember, but his elevation in status is a recent phenomenon.
Regardless, my main point is that EA should be sensitive to the reputation of its funders. Stuff like this feels off even if it may come from a well-intentioned place.
I was honestly surprised how quickly SBF was "platformed" by EA (but not actually surprised - he was a billionaire shoveling money in EA's direction). One day I looked up and he was everywhere: on every podcast I follow, fellow EAs quoting him, one EA telling me how much they wanted to meet his brother... it felt unearned/uncanny. For me, a main takeaway is that the community should be more cautious about the partners it aligns with and should build more resilient infrastructure to mitigate blowback when this stuff happens (it'll happen again; it always does with wealthy donors). When the major consultancies recently started getting flak for unsavory clients, they spun up teams to assess the ethical aspects of contracts and started turning down business that didn't align with certain standards.
FYI I'm not a "de-platforming" person, just felt like SBF immediately became a highly visible EA figure for no good reason beyond $$$.
Interested to hear why people are downvoting this comment... would love to engage in a discussion!
I wanted to keep the meat of my argument above as concise as possible, but also want to mention that EAs largely fail to grasp 1) what politics do to politicians and 2) the unknowable, cascading, massive impacts of political decisions. Politicians change their minds, trade votes, compromise, make decisions based on reelection. And the decisions they make reverberate. None of this is predictable or measurable, so it's hard to imagine how to classify it as effective altruism.
I appreciate you laying out the specifics here! As someone who grew up in/around politics, the ineffectiveness of a freshman member of congress feels obvious. I want to amplify the concern for politics & EA.
EA should seriously consider drawing the line at financial support. Some EAs want EA-aligned candidates to run, and that generally feels like a good idea. Rational politicians who care about important issues are better, right? They know what's best? Let's assume that's true, even if it's quite an assumption to make. Representatives vote on every bill, many of which have little to do with EA. How should we expect an EA candidate to vote on non-EA issues? If EA publicly and significantly backs a specific candidate, EA becomes at least a little culpable for all of that candidate's views, not just the EA ones. Furthermore, there's no guarantee that a candidate will vote how they say they'll vote, and even if they do, that doesn't guarantee results - whether that's winning a vote or operationalizing a government program that proves to be effective. There's so much uncertainty here. How can we as EAs truly calculate return on investment in campaign politics? I don't think we can with any real accuracy. There's nothing wrong with supporting candidates that you like, but this falls far short of what we typically expect in terms of evidence. It feels like informed voting, not EA.
Agree that running EA candidates may polarize issues that are refreshingly nonpartisan. This would be an own-goal of sizable consequence.
Politics is a high-leverage arena, so it's logical that EAs are attracted to it, especially now that there's money floating around. EA as a (mostly) nonpartisan movement has higher potential with less downside. Channeling the community's energy into lobbying and advocating for EA-aligned policy is straightforward, effective and transparent. "This strongly suggests that influencing current elected officials, rather than attempting to directly hold political power, plays more towards our strengths." I couldn't agree more.
ACT recently did a write-up on nightmares if you're interested!
+1 - Ecosystem services (and more generally, Earth systems) are infamously hard to pin down, which is why I often take any bottom-line analyses of climate change with gigantic grains of salt (in both directions). For example, there's currently a gold rush on technology to quantify the value of soil sequestration, forest sequestration, etc., and as far as I can tell, experts are still bickering over the basics of how to calculate these figures with any accuracy. Those are just a few small pieces of a very, very large pie that is difficult to value. Perhaps the modeling takes these massive uncertainties into consideration, but I'm skeptical (and will have to do some research of my own).
Lots of good stuff here! I work in the climate change field so I have expertise here, although it's crucial to note I haven't spent my career comparing the risk that climate change poses relative to the other big topics that concern EAs.
It's not surprising given my biases that I always grimace a little when EAs talk about climate. It's an easy target - lots of attention, tons of media hubbub, plenty of misinformed opinions and outright grifters, and of course, lack of direct existential threat. Hey look, here's an issue that most EAs care about that's already getting attention and talent, and if you run the numbers, according to our values...that's more than enough attention! So come work on an underserved issue like AI or pandemic risk! It makes sense to use it as a point of contrast and I'm glad that 80K Hours still takes climate change seriously. However, the framing could maybe be better, I'm not sure, I need to think about it more.
One small qualm within the well-researched piece - the plastic bag bit is off. Disregarding the fact that plastic bag fees aren't just about carbon reductions, that graph shows that as long as you don't make reusable bags out of cotton, reusable bags do exactly what you want them to do. That's not to say those policies are great - there are plenty of issues with them - but I don't find the example to be compelling evidence, especially because no policy demands cotton bags, nor do most people use them. I don't remember that Danish LCA being particularly good either.
Nick - absolutely! Making relocation more effective is imperative whether it be international or domestic. I believe that domestic migration is wildly underserved but the work done on that topic can and should be expanded to help facilitate immigration.
Thanks for sharing, Chris! I've been meaning to reach out to Teleport for a while to learn about their offerings. They've put together some decent data but the UI lacks something integral. I do like their intake survey as a way to narrow choices (a la @evelynciara's comment). The entire platform feels... abandoned? Could be a good partner down the line for the data side.
Fantastic! Yeah, the basic idea isn't novel - I've heard it a dozen times over the years. However, as far as I can tell, no one has delivered on it, probably because it's not particularly monetizable. Ultimately I think this kind of product is best suited to be a loss leader or public good.
Adding this as a separate comment to maintain some organization - I've mentioned this in comments on other posts, but I really think that there's room for an organization or mechanism that identifies and rewards undervalued EA-related work that's already being done at existing non-EA institutions. In the context of your post, it would further normalize the idea that plenty of good EA work happens outside of EA.
Great post/suggestions, I especially agree with target outreach. I want to amplify something that's touched on but not explicitly stated:
EA is simply a lens/framework - you can apply these principles anywhere, and the impact may be significant! I work in environmental sustainability / climate change mitigation and notice that the movements closely mirror each other because:
- Maximizing impact is the overarching principle (at least theoretically...)
- It’s a rapidly growing and trendy field.
- Until now, amateurs/volunteers/hobbyists have done a lot of the work.
In both EA and sustainability, people clamor for high-profile direct-impact roles, but those roles are incredibly competitive, may lack the imagined leverage, and candidates spend an outsized amount of time trying to get them. It’s difficult to quantify, but many (most?) people will be more impactful applying an EA framework to non-EA-specific work. The EA movement is still nascent enough that it makes sense to encourage people to apply to EA-specific roles or start new organizations, but eventually the messaging will transition to how you can apply EA to any job you take, not how you can become an EA superstar.
I've been thinking about this lately, especially since I've started applying to EA-specific opportunities. It does seem like EA orgs use intelligence as a main filter for hiring, which makes sense given the work (and is far better than plain-old credentialism), but I sometimes wonder if they're filtering out valuable candidates who are more clever, empathetic, or dogged than conventionally high-IQ. Most EA organizations are small, so I expect this will change as the community scales and becomes more inclusive of the full spectrum of skillsets. Note that this is a perspective from the outside looking in and is completely anecdotal. I could be mistaken.
Great idea! One way that I could see an org like this staying busy when not responding to emergencies is that it could train other more specialized organizations on how to... put together a team to respond to emergencies. This could amplify its impact and help with networking. ALERT could even train PMs to deploy to other organizations in emergency situations. A lot of institutions are already optimally positioned to do good but lack the capacity in emergencies.
My recent idea on the Future Funds' Project Ideas post may be relevant? https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=qeeCrLXA5dJCAkjTQ
Basically, there should be some mechanism for rewarding undervalued EA-related work. My idea focused on financial rewards, but it could extend to include some prestige. I'm not exactly sure how to confer social rewards - how people gather and socialize doesn't necessarily correlate with achievement (or maybe it does in the EA community... I wouldn't really know).
Peter - great idea, I've been doing some thinking on this as well, will probably send you an email!
Bonuses/prizes/support for critically situated or talented workers
Empowering Exceptional People
Work that advances society should be rewarded and compensated at fair market value. Unfortunately, rewards are often incommensurate, delayed, or altogether unrealized. We'd be excited to see a funding process that 1) identifies work that's underappreciated by or insulated from the market and 2) provides incentives for workers/teams to stay put and complete it.
EA often focuses on building new organizations to solve problems, but talented people are already situated within organizations that can foster real change. In government, academia, large legacy companies, and non-profits, incentives usually take the form of slowly accrued assets like prestige, job security, or future private-sector paydays. Unfortunately, these are also the organizations tasked with addressing urgent matters such as climate change, pandemics, housing shortages, etc.
How do we incentivize important work outside of the market’s reach? How do we incentivize talented but poorly compensated workers to stay at essential but bureaucratic organizations that are optimally positioned to foster change?
- Challenge Prizes: Small- to medium-sized prizes or donations for the completion of work that's moving too slowly. This provides a market signal that stresses urgency in no uncertain terms. It's similar to moonshots but more immediate/focused/localized.
- Bonuses: Externally funded performance bonuses for well-placed individuals at low-paying but important organizations, or external signing bonuses for obtaining high-leverage roles in these institutions.
- Coddling Services: Basically, personal assistant services for identified high-performing individuals that could use more time focusing (this is similar to an idea already posted by @JanBrauner).