My experience talking with scientists and reading science in the regenerative medicine field has shifted my opinion against this critique somewhat. Published papers are not the fundamental unit of science. Most labs are 2 years ahead of whatever they’ve published. There’s a lot of knowledge within the team that is not in the papers they put out.
Developing a field is a process of investment not in creating papers, but in creating skilled workers using a new array of developing technologies and techniques. The paper is a way of stimulating conversation and a loose measure of that productivity. But just because the papers aren’t good doesn’t mean there’s no useful learning going on, or that science is progressing in a wasteful manner. It’s just less legible to the public.
For example, I read and discussed with the authors a paper on a bioprinting experiment. They produced a one centimeter cube of human tissue via extrusion bioprinting. The materials and methods aren't rigorously controllable enough for reproducibility. They use decellularized pig hearts from the local butcher (what's it been eating, what were its genetics, how was it raised?), and an involved manual workflow to prepare and extrude the materials.
Several scientists in the field have cautioned me against assuming that figures in published data are reproducible. Yet does that mean the field is worthless? Not at all. New bioprinting methods continue to be developed. The limits of achievement continue to expand. Humanity is developing a cadre of bioengineers who know how to work with this stuff and sometimes go on to found companies with their refined techniques.
It’s the ability to create skilled workers in new manufacturing and measurement techniques, skilled thinkers in some line of theory, that is an important product of science. Reproducibility is important, but that’s what you get after a lot of preliminary work to figure out how to work with the materials and equipment and ideas.
Imagine we can divide up the global economy into natural clusters. We'll refer to each cluster as a "Global Project." Each Global Project consists of people and their ideas, material resources, institutional governance, money, incentive structures, and perhaps other factors.
Some Global Projects seem "bad" on the whole. They might have directly harmful goals, irresponsible risk management, poor governance, or many other failings. Others seem "good" on net. This is not in terms of expected value for the world, but in terms of the intrinsic properties of the GP that will produce that value.
It might be reasonable to assume that Global Project quality is normally distributed. One point of possible difference is the center of that distribution. Are most Global Projects of bad quality, neutral, or good quality?
We might make a further assumption that the expected value of a Global Project follows a power law, such that projects of extremely low or high quality produce disproportionately more value (or more harm). Perhaps, if Q is project quality and V is value, V = Q^N. But we might disagree on the details of this power law.
One possibility is that in fact, it's easier to destroy the world than to improve the world. We might model this with two power laws, one for Q > 0 and one for Q < 0, like so:
V = Q^3, Q >= 0
V = Q^7, Q < 0
In this case, whether or not progress is good will depend on the details of our assumptions about both the project quality distribution and the power law for expected value:
The size of N, and whether or not the power law is uniform or differs for projects of various qualities. Intuitively, "is it easier for a powerful project to improve or destroy the world, and how much easier?"
How many standard deviations away from zero the project quality distribution is centered, and in which direction. Intuitively, "are most projects good or bad, and how much?"
In this case, whether average expected value across many simulations of such a model comes out positive or negative can hinge on small alterations of the variables. For example, if we set N = 7 for bad projects and N = 3 for good projects, and assume that average project quality is +0.6 standard deviations from zero, then average expected value is mildly negative. At +0.7 standard deviations from zero, the average expected value is mildly positive.
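For concreteness, here's a minimal Monte Carlo sketch of this toy model. The function names and the choice of a unit-variance normal for project quality are my own illustrative assumptions, not something specified in the original graphs:

```python
import numpy as np

# Toy model: project quality Q ~ Normal(mu, 1); value follows two power laws,
# V = Q^3 for Q >= 0 and V = Q^7 for Q < 0 (an odd power, so bad worlds
# produce large negative value).
def world_value(q):
    return np.where(q >= 0, q**3, q**7)

def average_ev(mu, n_worlds=1_000_000, seed=0):
    """Average value across simulated worlds whose quality is centered at mu."""
    rng = np.random.default_rng(seed)
    q = rng.normal(loc=mu, scale=1.0, size=n_worlds)
    return world_value(q).mean()

# A small shift in the center of the quality distribution can flip the sign
# of the average EV, because a few cataclysmic worlds dominate the mean.
print(average_ev(0.6), average_ev(0.7))
```

Note that the heavy left tail (Q^7) makes the Monte Carlo estimate quite noisy from seed to seed, which is itself part of the point: a handful of catastrophic worlds dominates the average.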
Here's what an X-risk "we should slow down" perspective might look like. Each plotted point is a simulated "world." In this case, the simulation produces negative average EV across simulated worlds.
And here is what a Progress Studies "we should speed up" perspective might look like, with positive average EV.
The joke is that it's really hard to tell these two simulations apart. In fact, I generated the second graph by shifting the center of the project quality distribution 0.01 standard deviations to the right relative to the first graph. In both cases, a lot of the expected value is lost to a few worlds in which things go cataclysmically wrong.
One way to approach a double crux would be for adherents of the two sides to specify, in the spirit of "if it's worth doing, it's worth doing with made up statistics," their assumptions about the power law and project quality distribution, then argue about that. Realistically, though, I think both sides understand that we don't have any realistic way of saying what those numbers ought to be. Since the details matter on this question, it seems to me that it would be valuable to find common ground.
For example, I'm sure that PS advocates would agree that there are some targeted risk-reduction efforts that might be good investments, along with a larger class of progress-stimulating interventions. Likewise, I'm sure that XR advocates would agree that there are some targeted tech-stimulus projects that might be X-risk "security factors." Maybe the conversation doesn't need to be about whether "more progress" or "less progress" is desirable, but about the technical details of how we can manage risk while stimulating growth.
Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don't feel they have room to grow in terms of determining the expected value of the projects they're looking at. Very prepared to change my mind on this; I'm literally just going from the quotes in the context of the post to which they were responding.
Given that assumption (that grantmakers are already doing the best they can at determining EV of projects), then I think my three categories do carve nature at the joints. But if we abandon that assumption and assume that grantmakers could improve their evaluation process, and might discover that they've been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.
Your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
In the context of the EA forum, I don't think it's necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let's say in a given year):
Grantmakers run out of money and aren't able to fund all high-quality EA projects.
Grantmakers have extra money, and don't have enough high-quality EA projects to spend it on.
Grantmakers have exactly enough money to fund all high-quality EA projects.
None of these situations indicate that something is wrong with the definition of "high quality EA project" that grantmakers are using. In situation (1), they are blessed with an abundance of opportunities, and the bottleneck to do even more good is funding. In situation (2), they are blessed with an abundance of cash, and the bottleneck to do even more good is the supply of high-quality projects. In situation (3), they have two bottlenecks, and would need both additional cash and additional projects in order to do more good.
No matter how many problems exist in the world (suffering, death, X-risk), some bottleneck or another will always exist. So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake. It merely indicates that this is the present bottleneck they're facing.
For the rest, I'd say that there's a difference between "willingness to work" and "likelihood of success." We're interested in the reasons for EA project supply inelasticity. Why aren't grantmakers finding high-expected-value projects when they have money to spend?
One possibility is that projects and teams to work on them aren't motivated to do so by the monetary and non-monetary rewards on the table. Perhaps if this were addressed, we'd see an increase in supply.
An alternative possibility is that high-quality ideas/teams are rare right now, and can't be had at any price grantmakers are willing or able to pay.
In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved.
My position is that "demand" is a word for "what people will pay you for." EA exists for a couple reasons:
Some object-level problems are global externalities, and even governments face a free rider problem. Others are temporal externalities, and the present time is "free riding" on the future. Still others are problems of oppression, where morally-relevant beings are exploited in a way that exposes them to suffering.
Free-rider problems by their nature do not generate enough demand for people to do high-quality work to solve them, relative to the expected utility of the work. This is the problem EA tackled in earlier times, when funding was the bottleneck.
Even when there is demand for high-quality work on these issues, supply is inelastic. Offering to pay a lot more money doesn't generate much additional supply. This is the problem we're exploring here.
The underlying root cause is lack of self-interested demand for work on these problems, which we are trying to subsidize to correct for the shortcoming.
I can see how you might interpret it that way. I'm rhetorically comfortable with the phrasing here in the informal context of this blog post. There's a "You can..." implied in the positive statements here (i.e. "You can take 15 years and become a domain expert"). Sticking that into each sentence would add flab.
There is a real question about whether or not the average person (and especially the average non-native English speaker) would understand this. I'm open to argument that one should always be precisely literal in their statements online, to prioritize avoiding confusion over smoothing the prosody.
Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:
They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their own job search mechanisms can identify. In effect, they are outsourcing their headhunting for a particular job category.
Given that EA orgs seem primarily to lack specific forms of domain expertise, as well as well-defined project ideas/teams, what would an EA Triplebyte have to achieve?
They'd need to be able to interface with EA orgs and identify the specific forms of domain expertise that are required. Then they'd need to be able to go out and recruit those experts, who might never have heard of EA, and get them interested in the job. They'd be an interface to the expertise these orgs require. Push a button, get an expert.
That seems plausible. Triplebyte evokes the image of a huge recruiting service meant to fill cubicles with basically-competent programmers who are pre-screened for the in-house technical interview. Not to find unusually specific skills for particular kinds of specialist jobs, which it seems is what EA requires at this time.
That sort of headhunting job could be done by just one person. Their job would be to do a whole lot of cold-calling, getting meetings with important people, doing the legwork that EA orgs don't have time for. Need five minutes of a Senator's time? Looking to pull together a conference of immunologists to discuss biosafety issues from an EA perspective? That's the sort of thing this sort of org would strive to make more convenient for EA orgs.
As they gained experience, they would also be able to help EA orgs anticipate what sort of projects the domain experts they'd depend upon would be likely to spring for. I imagine that some EA orgs must periodically come up with, say, ideas that would require some significant scientific input. Some of those ideas might be more attractive to the scientists than others. If an org like this existed, it might be able to tell those EA orgs which ones the scientists are likely to spring for.
That does seem like the kind of job that could productively exist at the intersection of EA orgs. They'd need to understand EA concepts and the relationships between institutions well enough to speak "on behalf of the movement," while gaining a similar understanding of domains like the scientific, political, business, philanthropic, or military establishment of particular countries.
Great thoughts, ishaan. Thanks for your contributions here. Some of these thoughts connect with MichaelA's comments above. In general, they touch on the question of whether or not there are things we can productively discover or say about the needs of EA orgs and the capabilities of applications that would reduce the size of the "zone of uncertainty."
This is why I tried to convey some of the recent statements by people working at major EA orgs on what they perceive as major bottlenecks in the project pipeline and hiring process.
One key challenge is triangulation. How do we get the right information to the right person? 80000 Hours has solved a piece of this admirably, by making themselves into a go-to resource on thinking through career selection from an EA point of view.
This is a comment section on a modestly popular blog post, which will vanish from view in a few days. What would it take to get the information that people like you, MichaelA, and many others have, compile it into a continually maintained resource, and get it into the hands of the people who need it? Is that knowledge's shelf life long enough to be worth compiling, is it general enough to be worth broadcasting, and is it EA-specific enough to not be available elsewhere?
I'm primarily interested here in making statements that are durably true. In this case, I believe that EA grantmakers will always need to have a bar, and that as long as we have a compelling message, there will consequently always be some people failing to clear it who are stuck in the "zone of uncertainty."
With this post, I'm not trying to tell them what they should do. Instead, I am trying to articulate a framework for understanding this situation, so that the inchoate frustration that might otherwise result can be (hopefully) transmuted into understanding. I'm very concerned about the people who might feel like "bycatch" of the movement, caught in a net, dragged along, distressed, and not sure what to do.
That kind of situation can produce anger at the powers that be, which is a valid emotion. However, when the "powers that be" are leaders in a small movement that the angry person actually believes in, it could be more productive to at least come to a systemic understanding of the situation that gives context to that emotion. Being in a line that doesn't seem to be moving very fast is frustrating, but it's a very different experience if you feel like the speed at which it's moving is understandable given the circumstances.
Good thoughts. I think this problem decomposes into three factors:
Should there be a bar, or should all EA projects get funded in order of priority until the money runs out?
If there's a bar, where should it be set, and why?
After the bar is set, when should grantmakers re-examine its underlying reasoning to see if it still makes sense under present circumstances?
My post actively argues that we should have a bar, is agnostic on how high the bar should be, and assumes for the reader's purposes that the bar is immobile.
At some point, I may give consideration to where and how we set the bar. I think that's an interesting question both for grant makers and people launching projects. A healthy movement would strive for some clarity and consensus. If neophytes could more rapidly gain skill in self-evaluation relative to the standards of the "EA grantmaker's bar," without killing the buzz, it could help them make more confident choices about "looping out and back" or persevering within the movement.
For the purposes of this comment section, though, I'm not ready to develop my stance on it. Hope you'll consider expanding your thoughts in a larger post!
My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.
It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don't think EA is old or big or wealthy enough to have a sharp sense of exactly what the stable needs are.
For a sense of scale, there are almost 4 million programmers in the USA. Triplebyte launched just 5 years ago. It took millions of people working as programmers to generate adequate demand and capacity for that service to be successful.
All in all, my guess is that what we're missing is charismatic founder-types. The kind of people who can take one of the problems on our long lists of cause areas, turn it into a real plan, pull together funding and a team (of underutilized people), and make it go.
Figuring out how to teach that skill, or replace it with some other founding mechanism, would of course be great. It's necessary. Otherwise, we're kind of just cannibalizing one highly-capable project to create another. Which is pretty much what we do when we try to attract strong outside talent and "convert" them to EA.
Part of the reason I haven't spent more time trying to found something right off the bat is that I thought EA could benefit more if I developed a skillset in technology. But another reason is that I just don't have the slack. I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.
Most neophytes don't have that kind of slack. That's why I especially lean on the side of "if it hurts, don't do it."
I don't have any negativity toward the encouragement to try things and be audacious. At the same time, there's a massive amount of hype and exploitative stuff in the entrepreneurship world. The "Think of the guy who wrote Winzip! He made millions of dollars, and you can do it too!" line that business gurus use to suck people into their self-help sites and YouTube channels and so on.
The EA movement had some low-hanging fruit to pick early on. It's obviously a huge win for us to have great resources like 80k, or significant organizations like OpenPhil. Some of these were founded by world-class experts (Pete Singer) and billionaires, but some (80k) were founded by some young audacious people not too far out of grad school. But those needs, it seems to me, are filled. The world's pretty rich. It's easier to address a funding shortfall or an information shortfall, than to get concrete useful direct work done.
Likewise in the business world, it's easier to find money for a project and outline the general principles of how to run a good business, than to actually develop and successfully market a valuable new product. There's plenty of money out there, and not a ton of obvious choices to spend it on. Silicon Valley's looking for unicorns. We're looking for unicorns too. There aren't many unicorns.
I think that the "EA establishment's" responsibility to neophytes is to tell them frankly that there's a very high bar, it's there for a reason, and for your own sake, don't hurt yourself over and over by failing to clear it. Go make yourself big and strong somewhere else, then come back here and show us what you can do. Tell people it's hard, and invite them back when they're ready for that kind of challenge.
Triplebyte's value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:
More predictive of success-linked technical prowess
Convenient (since companies don't have to run the technical interviews themselves)
If there's room for an "EA Triplebyte," that would suggest that EA orgs have at least one of those three problems.
So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.
Are they looking at the same sorts of skills? Are their tests any good? Are the tests time-consuming and burdensome for EA orgs? Alternatively, do many EA orgs pass up on needed hires because they don't have the short-term capacity to evaluate them?
Then you'd need to consider what alternative tests would be a better measurement of technical research prowess, and how to show that they are better predictive of success than present technical interviews.
It would also be important to determine the scale of the problem. Eyeballing this list, there are maybe 75 EA-related organizations. How many hires do they make per month? How often does their search fail for lack of qualified candidates? How many hours do they spend on technical interviews each time? Will you be testing not for EA-specific but for general research capacity (massively broadening your market, but also increasing the challenge of addressing all their needs)?
Finally, you'd need to roll that up into a convenient, trustworthy, and reliable package that clients are excited to use instead of their current approach.
This seems like a massive amount of work, demanding a strong team, adequate funding and prior interest by EA orgs, and long-term commitment. It also sounds like it might be really valuable if done well.
Figuring out how to give the right advice to the right person is a hard challenge. That's why I framed skilling up outside EA as being a good alternative to "banging your head against the wall indefinitely." I think the link I added to the bottom of this post addresses the "many paths" component.
The main goal of my post, though, is to talk about why there's a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you've become convinced that you can't surpass it at this stage in your journey.
It would be helpful to find a test to distinguish EAs who should keep trying from those who should exit, skill up, and return later. Probably one-on-one mentorship, coupled with data on what sorts of things EA orgs look for in an applicant, and the distribution of applicant quality, would be the way to devise such a test.
A team capable of executing a high-quality project to create such a test would (if I were an EA fund) definitely be worthy of a grant!
Hi Michael, thanks for your responses! I'm mainly addressing the metaphorical runner on the right in the photograph at the start of the post.
I am also agnostic about where the bar should be. But having a bar means that you have to hold the bar in place. You don't move it just because you couldn't find a place to spend all your money.
For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. I've run EA research and community development projects with Vaidehi Agarwalla, and published my own writing here and on LessWrong. These outlets, plus my pursuit of a scientific research career, have been satisfying outlets for my altruistic drive.
Not everything has been successful - but I learned a lot along the way, and feel optimistic about the future.
Yet I see other people who seem very concerned and often disappointed at the difficulty they have in their own relationship with EA. Particularly, getting EA jobs and grants, or dealing with the feeling of "I want to save the world, but I don't know how!" I'm extremely optimistic that EA is and will continue to make an outsize positive impact on the world. What I'm more afraid of is that we'll generate what I call "bycatch."
Just to address point (2), the comments in "EA is vetting-constrained" suggest that EA is not that vetting-constrained:
Denise Melchin of Meta Fund: "My current impression for the Meta space is that we are not vetting constrained, but more mentoring/pro-active outreach constrained.... Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12)."
Claire Zabel of Open Philanthropy: "Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct... Often I feel like it's an inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about"
Jan Kulveit of FHI: "as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts... Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work."
One story, then, is that EA has successfully eliminated a previous funding bottleneck for high-quality world-saving projects. Now we have a different bottleneck - the supply of high-quality world-saving projects (and people clearly capable of carrying them out).
In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you'll always have either too much supply, too much demand, or a perception of complacency (where we've matched them up just right, but are disappointed that we haven't scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.
So how do we increase the supply of high-quality world-saving projects? Well, start by factoring projects into components:
A sharp, well-evaluated, timely idea with world-saving potential that also provides the team with enough social reward they're willing to take it on
A proven, generally competent, reliable team of experts who are available to work, committed to that idea, yet able to pivot
Adequate funding both for paying the team and funding their work
Access to outside consulting expertise
In many cases, significant political capital
Viewed from this perspective, it's not surprising at all that increasing the supply of such projects is vastly more difficult than increasing funding. On the other hand, this gives us many opportunities to address this challenge.
Perhaps instead of adding more projects to the list, we need to sharpen up ideas for working on them. Amateur EAs need to spend less time dreaming up novel causes/projects and more time assembling teams and making concrete plans - including for their personal finances. EAs need to spend more time building up networks of experts and government workers outside the EA movement.
I imagine that amateur EAs trying to skill up might need to make some serious sacrifices in order to gain traction. For example, they might focus on building a team to execute a project, but by necessity make the project small, temporary, and cheap. They might need to do a lot of networking and take classes, just to build up general skills and contacts, without having a particular project or idea to work on. They might need to really spend time thinking through the details of plans, without actually intending to execute them.
If I had to guess, here are some things that might benefit newer EAs who are trying to skill up:
Go get an MS in a hard science to gain some skill executing concrete novel projects and working in a rigorous intellectual discipline.
Write a book and get it published, even if it's not on anything related to EA.
Get an administrative volunteer position.
Manage a local non-EA altruistic project to improve their city.
Making therapeutic or life-improving drugs more available
Freeing up tax money for other purposes
Decreasing revenue for terrorists and other bad actors
This seems to be a cause where partial success is meaningful. Every reduction in unnecessary imprisonment, tax dollar saved, and terrorist cell put out of business is a win. We also have some roughly sliding scales - the level of enforcement priority, gradations of legality (research vs medical vs recreational, decriminalization vs legalization), and treatment of offenders (informal social norms vs warnings vs treatment/fines vs jail).
So this suggests to me that neglectedness is relevant in this case. How relevant seems like a detailed question. But given that there's a fair amount of short-term self-interested incentives to legalize drugs, it doesn't seem obvious a priori that this would be a target for EAs relative to, say, animal suffering.
That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then tailoring your PhD to optimize for them.
One challenge with the "work backwards" approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. As I attempted this planning for my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they'll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a way to refine those ideas and gain the experience/network/credentials to stay in the game.
The "work backwards" approach is equally applicable to resource-gathering as finding concrete solutions to specific world problems.
I think it's important for career builders to develop gears-level models of how a PhD or tenured academic career gives them resources + freedom to work on the world problems they care about; and also how it compares to other options.
Often, people really don't seem to do that. They go by association: scientists solve important problems, and most of them seem to have PhDs and academic careers, so I guess I should do that too.
But it may be very difficult to put the resources you get from these positions to use in order to solve important problems, without a gears-level model of how those scientists use those resources to do so.
Encouraging PhD students to be more strategic about how they pursue it
Discouraging longtermist EA PhD-holders from going on to pursue a faculty position in a university, thus implying that they should pursue some other sector (perhaps industry, government, or nonprofits)
I also wanted to encourage you to add more specific observations and personal experiences that motivate this advice. What type of grad program are you in now (PhD or master's), and how long have you been in it? Were you as strategic in your approach to your current program as you're recommending to others? What are some specific actions you took that you think others neglect? Why do you think that other sectors outside academia offer a superior incentive structure for longtermist EAs?
This prior should also work for other technologies sharing these reference classes. Examples might include a tech suite amounting to 'longevity escape velocity', mind reading, fully-immersive VR, or highly accurate 10+ year forecasting.
Hi Rob. I can only speak for myself. A lot of people, myself included, discover EA online, because the name or the ideas feel right.
Then we discover there’s a lot of people involved, huge amounts written, and many efforts going on. How do we meet people? How can we contribute? How can we find our place? How do we make sense of all the ideas?
I can only say that nobody is a nobody, and everybody struggles with these questions. It takes time to work it all out, so I advise patience. Write your thoughts out, and make sure to take care of yourself. It sounds like you are in the middle of building up a stable life for yourself, and I believe it’s extremely important for people in EA to focus on that first. Good luck!
I think it can be all of this, and much more. EA can have tremendous capacity for issuing broad recommendations and tailored advice to individual people. It can be about philosophy, governance, technology, and lifestyle.
How could we have a movement for effective altruism if we couldn’t encompass all that?
This is a community, not a think tank, and a movement rather than an institution. It goes beyond any one thing. So to join it or explain it - that’s a little like explaining what America is all about, or Catholicism is all about, or science is all about. You don’t just explain it, you live it, and the journey will look different to different people. That’s a feature, not a bug.
That’s good feedback and a complementary point of view! I wanted to check on this part:
“I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization.”
Are you saying that you think EA is not particularly prone to generating bycatch? Or that it is, but it’s a problem that needs higher-level solutions?
I think for me, it might be best to use a straightforward “join us!” pitch.
Most people I know have considered the idea that there are better and worse ways to help the world. But they don’t extend that thinking to realize the implication that there might be a set of best ways. Nor do they have the long-tail of value concept. They also don’t have any emotional impulse pushing them to explore “what’s the best way to help the world?” Nor do they have any links to the community besides me.
My experience is that most of my friends and family have very limited bandwidth for considering or acting on altruistic ideas. If they do, they have even less bandwidth for thinking critically about effectiveness with an open mind.
So I’m thinking it might be good to try a conversation that goes something like this:
“I’m in the effective altruism movement!”
“We research to figure out the most effective ways to make the world a better place. You should join, it would be awesome to have you!”
“Hm, that sounds cool. But how do you figure something like that out?”
“Oh it’s super interesting. Takes quite a bit of thought of course, but it’s also fun. I can show you if you want?”
“Ok, so what’s a way you want to help the world, maybe by volunteering or donating or something?”
“Um, I donated to a food bank for Chanukah.”
“Great! So here’s how we’d think about that at EA. Basically we want to start by figuring out the principle behind why you picked a food bank. Why’d you donate there?”
“I heard the food banks were running low because of COVID, plus I like to cook.”
“Cool, that makes sense. So partly it fits with your interests, and partly it’s about making sure people have enough to eat?”
“Yeah, pretty much.”
“Gotcha. Ok. So in EA, we focus on the ‘help other people’ part especially, so let’s set aside the fact that you like to cook and focus on the getting food to people part, is that ok?”
“So this might seem like kind of a silly question, but why is it important for people to get enough to eat?”
“So they don’t starve, or go hungry.”
“Right. I mean those things are obviously bad, and we want to think about what exactly is bad about starving, or going hungry?”
“Well, you could die. Or just be really miserable. It makes kids not be able to think straight in school. Plus you might not be able to work and you could end up homeless.”
“Right. So misery, death, and just struggling to be able to keep your life together?”
“Ok. So this is where EA gets into the picture. So first off, EAs think that everybody’s lives matter equally, like a kid in Africa’s life matters just as much as a kid in America. Do you agree with that?”
“Right, I figured! And where do you think people are struggling more with food insecurity, here in our city or in a place like, say, Yemen?”
“Uh, definitely Yemen.”
“And where do you think the money you donated would go further toward buying food, here or in a place like Yemen?”
“Probably also Yemen? Except they have a war going on I think, so maybe it’s hard to get food there?”
“You’re already thinking like an EA! You can already kind of see where this leads, right? We’re trying to think of where to make your donation go farthest, plus make sure it actually accomplishes something. Like, maybe the food pantry in our city is low on food, but maybe there are places where people have nothing to eat at all.”
“Right, right... but the thing is, don’t we have a responsibility to help people here? And plus, how would you, like, figure out where to donate to help people in Yemen? How do you know the charity actually works?”
“Well basically, I’d start by saying this is a really complicated subject, and I’d be happy to talk it out for as long as you’re interested. It’s one of my favorite topics. But this is why I think it’s really important to join EA. We basically have a whole community of people and nonprofits who are super focused on all this stuff. We think through those thorny questions like whether it’s best to focus on helping people in your own community. Also doing, like, tens of thousands of hours on charities to see which ones really work, which basically nobody was doing before we started the movement. So the point is, if you’re in EA, you don’t have to figure it all out for yourself. Want to join?”
I know it seems silly to frame it as a club that you join, but also... why not?
I think these issues are extremely complex, and I think you bring up a good point, one with underlying values that I agree with. Nevertheless, many of my research interests are in Alzheimer's, chronic severe pain, and life extension. I think that people in poor countries ultimately are going to improve their length and quality of life, and there's a strong trend in that direction already. I am long on malaria being eradicated within the next 30 years. We mostly know what to do; what's holding us back is a combination of environmental caution and the challenges of culturally sensitive governance.
I'm most concerned with the despair and suffering of the elderly and chronically ill, from a sheer "loss of utility" perspective. These problems are incredibly complex: we still just have one Alzheimer's drug, and it buys you maybe an extra year. We don't understand how pain works. Most of the utility of the investment in R&D lies at the end of the research process, so the non-neglected nature of these problems is irrelevant from the perspective of utility. Of course, it's quite relevant from the perspective of basic fairness. That's just less of a motivator for me.
Beyond that, I'm sort of an immortalist. I think that the best way to get people to broaden their moral horizons and think long-term is to help them live longer, happier, healthier lives. I honestly do think it's an emergency that even in the industrialized world, life expectancy is only into the late 70s and our declines come with lots of suffering. You spend your best years trying to save up to afford your worst years. Preaching about animals and the poor and our descendants doesn't work on a scale big enough to change the world. The only way I see to change the situation is to dramatically improve the experience of old age and reduce chronic suffering. My intuition is that happy and relaxed people are more compassionate, and that it's fear or the experience of pain and dementia that undermine our happiness and contemplative ability.
Here is that review I mentioned. I'll try and add this post to that summary when I get a chance, though I can't do justice to all the mathematical details.
If you do give it a glance, I'd be curious to hear your thoughts on the critiques regarding the shape and size of the marginal returns graph. It's these concerns that I found most compelling as fundamental critiques of using ITN as more than a rough first-pass heuristic.
The end of this post will be beyond my math til next year, so I’m glad you wrote it :) Have you given thought to the pre-existing critiques of the ITN framework? I’ll link to my review of them later.
In general, ITN should be used as a rough, non-mathematical heuristic. I’m not sure the theory of cause prioritization is developed enough to permit so much mathematical refinement.
In fact, I fear that it gives a sheen of precision to what is truly a rough-hewn communication device. Can you give an example of how an EA organization presently using ITN could improve their analysis by implementing some of the changes and considerations you’re pointing out?
I also hoped to imply that ITN is more than a heuristic. It also serves a rhetorical purpose.
I worry that its seeming simplicity can belie the complexity of cause prioritization. Calculating an ITN rank or score can be treated as the end, rather than the beginning, of such an effort. The numbers can tug the mind in the direction of arguing with the scores, rather than evaluating the argument used to generate them.
My hope is to encourage people to treat ITN scores just as you say - taking them lightly and setting them aside once they've developed a deeper understanding of an issue.
Agreed. However, one of the subcritiques in that point is the divide-by-zero issue that makes causes that have received zero investment look "theoretically unsolvable." Because solvability is measured per percentage increase in resources, any addition to a starting point of zero counts as an infinite percentage increase, so the measured solvability comes out to zero. The critic seems to feel it's a result of dividing up the issue in this way.
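To make the divide-by-zero concern concrete, here's a minimal sketch. The numbers and the function names `pct_increase` and `solvability` are my own illustrative choices, not any EA organization's actual methodology:

```python
def pct_increase(old, new):
    """Increase in resources, as a fraction of the old level."""
    if old == 0:
        return float('inf')  # any addition to zero funding is an infinite % increase
    return (new - old) / old

def solvability(fraction_solved, resource_increase):
    """ITN-style solvability: fraction of the problem solved per % increase in resources."""
    return fraction_solved / resource_increase

# A cause with $0 of current funding gets a $1M grant that solves 10% of the problem:
increase = pct_increase(0, 1_000_000)          # inf
print(solvability(0.10, increase))             # 0.0 -- "theoretically unsolvable"

# The same grant to a cause already funded at $1M looks highly solvable:
increase = pct_increase(1_000_000, 2_000_000)  # 1.0, i.e. a 100% increase
print(solvability(0.10, increase))             # 0.1
```

The identical grant with the identical impact scores zero or nonzero depending only on the funding baseline, which is the artifact the critic is pointing at.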
I leave it to the forum to judge!
Comment by AllAmericanBreakfast on [deleted post]
Can you give a few examples? Having options and avoiding risk are both good things, all else being equal.
There’s a range of posts critiquing ITN from different angles, including many of the ones you specify. I was working on a literature review of these critiques, but stopped in the middle. It seemed to me that organizations that use ITN do so in part because it’s an easy-to-read communication framework. It boils down an intuitive synthesis of a lot of personal research into something that feels like a metric.
When GiveWell analyzes a charity, they have a carefully specified framework they use to derive a precise cost effectiveness estimate. By contrast, I don’t believe that 80k or OpenPhil have anything comparable for the ITN rankings they assign. Instead, I believe that their scores reflect a deeply researched and well-considered, but essentially intuitive personal opinion.
I want to give more context for the MacAskill quote.
The most obvious implication [of the Hinge of History hypothesis], however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.
Here, he is talking about strategies for solving specific problems, X-risks in this case. This is not relevant to the cluelessness argument advanced by Mogensen and that I am addressing. Later in his article, though, he does touch on the topic.
Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment.
Buck-passing, or punting, is compatible with the "debugging" concept, but not with Mogensen's "cluelessness." With debugging, you deliberate as long as is possible or productive, and then act as wisely as possible. Once you've made a decision, you fix side effect problems as they arise, which might include finding ways to reverse the decision where possible. Although some decisions will result in genuine enormous moral disasters, such as slavery or Nazism, this approach appears to me to be both net good and our only choice.
With Mogensen's cluelessness argument, it doesn't matter how long you deliberate, because you have to be able to predict the ripple effects and their moral weights into the far future first. Since that's impossible, you can never know the moral value of an action. We therefore can't morally prefer one action over another. I'm not strawmanning this argument. It really is that extreme.
Buck-passing/punting is also not identical to "debugging." In buck-passing or punting, we're deferring a decision on a specific issue to a wiser future. A current ban on genetically engineered human embryos is an example. In debugging, we're making a decision, and trusting the future to resolve the unexpected difficulties. Climate change is an example: our ancestors created fossil fuel-based industry, and we are dealing with the unexpected consequences.
The reason I don't feel the need to engage with the cluelessness literature is because, when sensible, it's simply providing another approach to describing basic problems from economic theory and common sense, which I understand reasonably well and expect I can learn better from those sources. When done badly, it's a salad of sophistry with a thick and unnecessary dressing of formal logic. I can't read everything and I think I'll learn a lot more of value from studying, oh, almost anything else. These writers need to convince me that they've produced insights of value if they want me to engage. I'm just describing why they haven't succeeded in that project so far.
By the way, I appreciate you responding to my post. Although I'm sure you can see I've got little patience for Mogensen and the cluelessness literature I've seen more generally, I think it's important to have conversations about it. And it's always nice to have someone take an interest.
Her first example of "complex cluelessness" is the same population size argument made by Mogensen, which I dealt with in section 2a. I think both simple and complex cluelessness are dealt with nicely by the debugging model I am proposing. But I'm not sure it's a valid distinction. I suspect all cluelessness is complex.
Debugging is a form of capacity building, but the distinction I drew is necessary. Sometimes we try to build advance capacity to solve an as-yet-intractable problem, as in AI safety research. This is vulnerable to the cluelessness argument. Even if we are successful in those efforts and manage to solve the problem, we still cannot predict all the precise long-term consequences. Too much moral dark matter remains. This form of capacity-building cannot stand up to Mogensen and Greaves' critique, because it doesn't address the problem they raise.
This debugging model does. Beyond our ability to build capacity to solve specific and known intractable problems, we already and likely always will have capacity to solve problems in general. Unknown unknowns become known, and then we solve them. We keep the good, fix the bad, and develop more wisdom to deal with the ugly.
I'm not planning on engaging further with the cluelessness literature because what I've seen makes me think GPI is off track. It strikes me as a combination of sophistry and obscurantism that I find hard to take seriously. This writing was an attempt to get my own thoughts in order. I invite others who find their ideas more compelling to explain why "debugging," in conjunction with a frank acknowledgement that the future is risky, can't account for cluelessness.
In my OP, I just meant that if the applicant gets in, they can teach. Too many applicants doesn't necessarily indicate that the field is oversubscribed, it just means that there's a mentorship bottleneck. One possible reason is that senior people in the field simply enjoy direct work more than teaching and choose not to focus on it. Insofar as that's the case, candidates are especially suitable if they're willing to focus more on providing mentorship if they get in and a bottleneck remains by the time they become senior.
Thanks for the feedback, it helps me understand that my original post may not have been as clear as I thought.
In the absence of other empirical information, I think it's a safe assumption that present bottlenecks correlate with future bottlenecks, though your first point is well taken.
I'm not quite following your second argument. It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why. Enlighten me?
Your third point is also correct. Stated generally, finding ways to increase the availability of the primary bottlenecked resource, or accomplish the same goal while using less of it, is how we can get the most leverage.
There are already at least three companies in this space: RoomieMatch, Roomi, and Roomster. I wonder why nobody I know uses them, but dating apps are very popular?
It seems to me that there are triangulation, trust, and transfer problems in roommate matching that go beyond what OKCupid has to deal with:
There are more than two people involved, and the difficulty of finding communal compatibility grows combinatorially with the number of roommates.
By the same token, people move in and out more frequently in larger households, often with short notice, making it hard to keep a stable equilibrium of preferences.
Imagine if it was easy to "date your future housemates," perhaps by living together for a month. It's already emotionally painful for people to deal with or inflict rejection in one-on-one dating. Imagine being the "odd man out" in this situation. That sounds like a recipe for really uncomfortable social dynamics.
People who rent because they can't afford their own place probably can't afford a high-touch service. People who have more money could buy their own place and interview enough roommates to make sure everyone is a good fit with them personally.
Landlords often influence or even entirely control the process of finding new roommates. There are also laws around evictions that make it very difficult to kick somebody out if it's not working for the others, whereas there are no legal barriers to breaking up with someone you're dating if there's no marriage and no kids.
There's a much higher effort and commitment barrier required to move than to go on a date.
This is speculative, but OKCupid's success may stem from capitalizing on a cultural institution that makes romantic love feel of vast importance. By contrast, finding an ideal group of roommates doesn't have the same cultural importance: we still dream of having our own place by ourselves or with our own biological family. To have comparable success, such a service would need to create a new dream. Even if that's your dream, is it the dream of your housemates?
Similarly, the service OKCupid provides may be less in matching people with compatible characteristics, and more in identifying an abundance of single people and getting them hyped to go on a date. The purpose of the "matching" is to trick you into building up anticipation, not to ensure a really good fit (after all, if it did that too well, people wouldn't come back for more!). Instinct, hormones, and love do most of the work of making people stick together in the end.
When people do try and start intentional group houses, they're often organized around a shared social movement, which already have word-of-mouth and social media channels where people can learn about these opportunities for free.
I think a company would do better to work on solving one or more of these problems.
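To put a number on how compatibility-finding scales with household size: the count of pairwise relationships among n housemates is n choose 2, so each additional roommate adds several new relationships that all have to work. A quick illustration:

```python
from math import comb

# Number of pairwise relationships that all need to be compatible
# in an n-person household:
for n in range(2, 7):
    print(n, comb(n, 2))  # 2->1, 3->3, 4->6, 5->10, 6->15
```

A dating app only ever has to evaluate the n=2 case; a six-person house has fifteen relationships, any one of which can sour the match.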
Crossposted from the LW forum
It’s just that your first comment sounded a bit like you’re implying that 10% of the population suffers from excruciating kidney stones. With your estimated numbers (10% of population affected at some point in their lives, 2% of cases at 9/10 on the pain scale), it would be more like 0.2%.
That’s probably still a lot if you multiply by the world population and total pain episode lengths. I don’t know how long such a case typically lasts with modern medical care, but plenty of people don’t have access to it.
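As a quick sanity check on that arithmetic (using the rough estimates from this thread, not real epidemiological data):

```python
lifetime_prevalence = 0.10  # rough estimate: ~10% get kidney stones at some point
severe_fraction = 0.02      # rough estimate: ~2% of cases reach 9/10 on the pain scale

severe_lifetime = lifetime_prevalence * severe_fraction  # ~0.002, i.e. ~0.2%

world_population = 8_000_000_000  # round number for illustration
# ~16 million people over their lifetimes, before weighting by episode length:
print(round(severe_lifetime * world_population))
```

So the 0.2% figure is small as a rate but still large in absolute terms, which is the point of the second paragraph.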
Of course, this all depends on whether the 2% number is a reasonable estimate, and whether the pain scale is exponential.
But my guess is that a better strategy would involve better medical prevention and treatment of underlying causes in most cases. After all, flooding the USA with powerful painkillers hasn’t exactly been a boon to the nation (see opioids).
I'll give that some thought, but I'm no expert on this. Just pulling together some memories of things I've read and experiences I've had. But my impression is that chronic extreme pain is something that we never adapt to.
A top Google hit for “extinguish coal seam fires” says the gov paid $42 million to relocate Centralians when their early attempts to put it out failed. That suggests to me that they had a much higher estimate than you about the cost of putting it out.
Centralia is in Washington State, where Jay Inslee is the governor. He’s billed himself as the climate change candidate, and has pushed for leading-edge anti-CC policy here. Might be worth really digging into the politics and budget of the state to look for explanations. It might be that he’s informed by environmental lobbying groups like the Sierra Club. If coal seam fires are off their radar, then the issue might never get seen by state government.
Overall, I’d recommend thinking about causes of neglect both from the standpoint of public bias and the institutional chain of transmission.