Posts

Pick causes like you pick restaurants 2021-09-17T04:06:27.327Z
EA is a Career Endpoint 2021-05-14T23:58:37.138Z
Talking With a Biosecurity Professional (Quick Notes) 2021-04-10T04:23:10.056Z
Why I prefer "Effective Altruism" to "Global Priorities" 2021-03-25T18:23:19.182Z
Articles are invitations 2021-03-17T20:28:15.056Z
Don't Be Bycatch 2021-03-10T05:28:28.487Z
For Better Commenting, Avoid PONDS 2021-02-05T00:29:57.391Z
Trade Heroism For Grit. 2020-06-06T19:57:04.733Z
EA lessons from my father 2020-05-10T20:37:51.407Z
Why aren't we talking about personal development? 2020-02-29T20:23:14.240Z
[WIP] Summary Review of ITN Critiques 2019-10-09T08:27:49.403Z
Competition is a sign of neglect in important causes with long time horizons for impact. 2019-08-31T01:42:46.531Z
Peer Support/Study/Networking group for EA math-centric students 2019-07-28T21:47:47.301Z
Math advising interview notes + project ideas (for math-inclined EA career changers) 2019-07-26T19:40:20.406Z
Call for beta-testers for the EA Pen Pals Project! 2019-07-26T19:02:03.422Z
Seeking EAs to Interview on Career Change Resources 2019-07-12T00:57:26.471Z
Open for comment: EA career changer worksheet 2019-07-03T20:05:18.890Z
For older EA-oriented career changers: discussion and community formation 2019-07-01T20:46:00.021Z

Comments

Comment by AllAmericanBreakfast on Call to Vigilance · 2021-09-16T03:49:09.729Z · EA · GW

Do you have an opinion on the second-best venue for people interested in these issues to find community?

Comment by AllAmericanBreakfast on How to succeed as an early-stage researcher: the “lean startup” approach · 2021-09-09T13:34:37.207Z · EA · GW

I asked Cleve what made him decide that the singular value decomposition, and later MATLAB, were topics worth focusing on. What sources of information did he look to? Was he trying to discern what other people were interested in?

What I took away from his response was that he never picked topics based on the scale of the potential application. For example, he didn't decide to study the mathematics underpinning computer graphics because of the applied importance of computer graphics. He just has a relentless interest in the underlying mathematics, and wants to understand it. What can we learn about the quaternion, which can be written as a 4x4 matrix and is the workhorse of rotation in computer graphics? His understanding of these topics developed bit by bit, through small-scale interactions with other people.
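
As a quick aside on that description: the quaternion q = a + bi + cj + dk is itself a four-component number; it's left-multiplication by q, acting on another quaternion p written as the column vector (p_0, p_1, p_2, p_3), that takes the form of a 4x4 real matrix:

$$
L_q = \begin{pmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{pmatrix}, \qquad qp = L_q\,p.
$$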

We should treat this sort of account with skepticism, both because it's a subjective assessment of his own history, and because it's a single and unrepresentative example of the outcomes of academic mathematical research. Cleve might have simply lucked into a billion-dollar topic. The fact that we're all asking him about his background is the result of selecting for outcomes, not necessarily for an unusually effective process.

But I think what he was saying was that to find ideas that are likely to nerd snipe somebody else, it's important to use your judgment to identify components of a field that are clearly important in an academic sense, and to try to understand them better. Having a sense of judgment for the importance of components of a system seems like an important underlying skill for the "lean startup" approach you're describing here.

Comment by AllAmericanBreakfast on How to succeed as an early-stage researcher: the “lean startup” approach · 2021-09-09T12:35:38.535Z · EA · GW

I am sitting in a virtual lecture with Cleve Moler, inventor of MATLAB. He just told us that he produced a 16mm celluloid film to promote the singular value decomposition in 1976. A clip from that film made it into Star Trek: The Motion Picture in 1979; it's on a screen behind Spock. A point of evidence in favor of the idea that promoting your ideas matters in academia.

Comment by AllAmericanBreakfast on What should we call the other problem of cluelessness? · 2021-07-04T21:29:09.094Z · EA · GW

“Partial” might work instead of “non-absolute,” but I still favor the latter even though it’s bulkier. I like that “non-absolute” points to a challenge that arises when our predictive powers are nonzero, even if they are very slim indeed. By contrast, “partial” feels more aligned with the everyday problem of reasoning under uncertainty.

Comment by AllAmericanBreakfast on What should we call the other problem of cluelessness? · 2021-07-04T21:20:20.962Z · EA · GW

One of the challenges is that “absolute cluelessness” is a precise claim: beyond some threshold of impact scale or time, we can never have any ability to predict the overall moral consequences of any action.

By contrast, the practical problem is not a precise claim, except perhaps as a denial of “absolute cluelessness.”

After thinking about it for a while, I suggest “problem of non-absolute cluelessness.” After all, isn’t it the idea that we are not clueless about the long term future, and therefore that we have a responsibility to predict and shape it for the good, that is the source of the problem? If we were absolutely clueless, then we would not have that responsibility and would not face that problem.

So I might vote for “absolutely clueless” and “non-absolutely clueless” to describe the state of being, and the “problem of absolute cluelessness” and “problem of non-absolute cluelessness” to describe the respective philosophical problems.

Comment by AllAmericanBreakfast on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-29T01:08:52.214Z · EA · GW

This reminds me of a conversation I had with John Wentworth on LessWrong, exploring the idea that establishing a scientific field is a capital investment for efficient knowledge extraction. Also of a piece of writing I just completed there on expected value calculations, outlining some of the challenges in acting strategically to diminish our uncertainty.

One interesting thing to consider is how to control such a capital investment, once it is made. Institutions have a way of defending themselves. Decades ago, people launched the field of AI research. Now, it's questionable whether humanity can ever gain sufficient control over it to steer toward safe AI. It seems that instead, "AI safety" had to be created as a new field, one that seeks to impose itself on the world of AI research partly from the outside.

It's hard enough to create and grow a network of researchers. To become a researcher at all, you have to be unusually smart and independent-minded, and willing to brave the skepticism of people who don't understand what you do even a fraction as well as you do yourself. You have to know how to plow through to an achievement that will clearly stand out to others as an accomplishment, and persuade them to keep sustaining your funding. That's the sort of person who becomes a scientist. Anybody with those characteristics is a hot commodity.

How do you convince a whole lot of people with that sort of mindset to work toward a new goal? That might be one measure of a "good research product" for a nascent field. If it's good enough to convince more scientists, especially more powerful scientists, that your research question is worth additional money and labor relative to whatever else they could fund or work on, you've succeeded. That's an adversarial contest. After all, you have to fight to get and keep their attention, and then to persuade them. And these are some very intelligent, high-status people. They absolutely have better things to do, and they're at least as bright as you are.

Comment by AllAmericanBreakfast on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-22T19:45:33.630Z · EA · GW

All these projects seem beneficial. I hadn't heard of any of them, so thanks for pointing them out. It's useful to frame this as "research on research," in that it's subject to the same challenges with reproducibility, and with aligning empirical data with theoretical predictions to develop a paradigm, as in any other field of science. Hence, I support the work, while being skeptical of whether such interventions will be useful and potent enough to make a positive change.

The reason I brought this up is that the conversation on improving the productivity of science seems to focus almost exclusively on problems with publishing and reproducibility, while neglecting the skill-building and internal-knowledge aspects of scientific research. Scientists seem to get a feel through their interactions with their colleagues for who is trustworthy and capable, and who is not. Without taking into account the sociology of science, it's hard to know whether measures taken to address problems with publishing and reproducibility will be focusing on the mechanisms by which progress can best be accelerated.

Honest, hardworking academic STEM PIs seem to struggle with money and labor shortages. Why isn't there more money flowing into academic scientific research? Why aren't more people becoming scientists?

The lack of money in STEM academia seems to me a consequence of politics. Why is there political reluctance to fund academic science at higher levels? Is academia to blame for part of this reluctance, or is the reason purely external to academia? I don't know the answers to these questions, but they seem important to address.

Why don't more people strive to become academic STEM scientists? Partly, industry draws them away with better pay. Part of the fault lies in our school system, although I really don't know what exactly we should change. And part of the fault is probably in our cultural attitudes toward STEM.

Many of the pro-reproducibility measures seem to assume that the fastest road to better science is to make more efficient use of what we already have. I would also like to see us figure out a way to produce more labor and capital in this industry. To be clear, I mean that I would like to see fewer people going into non-STEM fields - I am personally comfortable with viewing people's decision to go into many non-STEM fields as a form of failure to achieve their potential. That failure isn't necessarily their fault. It might be the fault of how we've set up our school, governance, cultural or economic system.

Comment by AllAmericanBreakfast on Has anyone found an effective way to scrub indoor CO2? · 2021-06-22T06:39:06.266Z · EA · GW

Indoor CO2 concentrations and cognitive function: A critical review (2020)

"In a subset of studies that meet objective criteria for strength and consistency, pure CO2 at a concentration common in indoor environments was only found to affect high-level decision-making measured by the Strategic Management Simulation battery in non-specialized populations, while lower ventilation and accumulation of indoor pollutants, including CO2, could reduce the speed of various functions but leave accuracy unaffected."

I haven't been especially impressed by claims that normal indoor CO2 levels are impairing cognitive function to any extent worth worrying about. Crack a window, I guess?

Comment by AllAmericanBreakfast on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-22T06:36:00.841Z · EA · GW

it could be a lot more valuable if reporting were more rigorous and transparent

Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?

Do I understand your comment correctly that you think that in your field the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or toward end users in industry?

No, the purpose of publishing is not mainly to communicate to the public. After all, very few members of the public read scientific literature. The truth-seeking or engineering achievement the lab is aiming for is one thing. The experiments they run to get closer are another. And the descriptions of those experiments are a third thing. That third thing is what you get from the paper.

I find it useful at this early stage in my career because it helps me find labs doing work that's of interest to me. Grantmakers and universities find them useful to decide who to give money to or who to hire. Publications show your work in a way that a letter of reference or a line on a resume just can't. Fellow researchers find them useful to see who's trying what approach to the phenomena of interest. Sometimes, an experiment and its writeup are so compelling that they actually persuade somebody that the universe works differently than they'd thought.

As you read more literature and speak with more scientists, you start to develop more of a sense of skepticism and of importance. What is the paper choosing to highlight, and what is it leaving out? Is the justification for this research really compelling, or is this just a hasty grab at a publication? Should I be impressed by this result?

It would be nice for the reader if papers were a crystal-clear guide for a novice to the field. Instead, you need a decent amount of sophistication with the field to know what to make of it all. Conversations with researchers can help a lot. Read their work and then ask if you can have 20 minutes of their time; they'll often be happy to answer your questions.

And yes, fields do seem to go down dead ends from time to time. My guess is it's some sort of self-reinforcing selection for biased, corrupt, gullible scientists who've come to depend on a cycle of hype-building to get the next grant. Homophily attracts more people of the same stripe, and the field gets confused.

Tissue engineering is an example. 20-30 years ago, the scientists in that field hyped up the idea that we were chugging toward tissue-engineered solid organs. Didn't pan out, at least not yet. And when I look at tissue engineering papers today, I fear the same thing might repeat itself. Now we have bioprinters and iPSCs to amuse ourselves with. On the other hand, maybe that'll be enough to do the trick? Hard to know. Keep your skeptical hat on.

Comment by AllAmericanBreakfast on Why scientific research is less effective in producing value than it could be: a mapping · 2021-06-19T19:57:26.010Z · EA · GW

My experience talking with scientists and reading science in the regenerative medicine field has shifted my opinion against this critique somewhat. Published papers are not the fundamental unit of science. Most labs are 2 years ahead of whatever they’ve published. There’s a lot of knowledge within the team that is not in the papers they put out.

Developing a field is a process of investment not in creating papers, but in creating skilled workers using a new array of developing technologies and techniques. The paper is a way of stimulating conversation and a loose measure of that productivity. But just because the papers aren’t good doesn’t mean there’s no useful learning going on, or that science is progressing in a wasteful manner. It’s just less legible to the public.

For example, I read and discussed with the authors a paper on a bioprinting experiment. They produced a one centimeter cube of human tissue via extrusion bioprinting. The materials and methods aren’t rigorously controllable enough for reproducibility. They use decellularized pig hearts from the local butcher (what’s it been eating, what were its genetics, how was it raised?), and an involved manual process to process and extrude the materials.

Several scientists in the field have cautioned me against assuming that figures in published data are reproducible. Yet does that mean the field is worthless? Not at all. New bioprinting methods continue to be developed. The limits of achievement continue to expand. Humanity is developing a cadre of bioengineers who know how to work with this stuff and sometimes go on to found companies with their refined techniques.

It’s the ability to create skilled workers in new manufacturing and measurement techniques, skilled thinkers in some line of theory, that is an important product of science. Reproducibility is important, but that’s what you get after a lot of preliminary work to figure out how to work with the materials and equipment and ideas.

Comment by AllAmericanBreakfast on What's wrong with the EA-aligned research pipeline? · 2021-06-05T15:57:38.449Z · EA · GW

Looking forward to hearing about those vetting constraints! Thanks for keeping the conversation going :)

Comment by AllAmericanBreakfast on Help me find the crux between EA/XR and Progress Studies · 2021-06-04T18:15:23.556Z · EA · GW

Imagine we can divide up the global economy into natural clusters. We'll refer to each cluster as a "Global Project." Each Global Project consists of people and their ideas, material resources, institutional governance, money, incentive structures, and perhaps other factors.

Some Global Projects seem "bad" on the whole. They might have directly harmful goals, irresponsible risk management, poor governance, or many other failings. Others seem "good" on net. This is not a judgment of expected value for the world, but of the intrinsic properties of the Global Project that will produce that value.

It might be reasonable to assume that Global Project quality is normally distributed. One point of possible difference is the center of that distribution. Are most Global Projects of bad quality, neutral, or good quality?

We might make a further assumption that the expected value of a Global Project follows a power law, such that projects of extremely low or high quality produce disproportionately more value (or more harm). Perhaps, if Q is project quality and V is value, V = Q^N for some exponent N. But we might disagree on the details of this power law.

One possibility is that in fact, it's easier to destroy the world than to improve the world. We might model this with two power laws, one for Q > 0 and one for Q < 0, like so:

  • V = Q^(N_good), for Q >= 0
  • V = -(-Q)^(N_bad), for Q < 0

In this case, whether or not progress is good will depend on the details of our assumptions about both the project quality distribution and the power law for expected value:

  • The sizes of the exponents N_good and N_bad, and whether the power law is uniform or differs for projects of various qualities. Intuitively, "is it easier for a powerful project to improve or destroy the world, and how much easier?"
  • How many standard deviations away from zero the project quality distribution is centered, and in which direction. Intuitively, "are most projects good or bad, and by how much?"

In this case, whether average expected value across many simulations of such a model is positive or negative can hinge on small alterations of the variables. For example, if we set N_bad = 7 and N_good = 3, but assume that average project quality is +0.6 standard deviations from zero, then average expected value is mildly negative. At an average project quality of +0.7 standard deviations from zero, the average expected value is mildly positive.
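
A minimal sketch of this kind of simulation, assuming 1,000 projects per world and 10,000 simulated worlds (both counts are my own assumptions; the exponents and distribution centers come from the paragraph above):

    import numpy as np

    rng = np.random.default_rng(0)

    def world_value(center, n_projects=1000, n_good=3, n_bad=7):
        """Total value of one simulated world under the split power law:
        V = Q^N_good for Q >= 0, V = -(-Q)^N_bad for Q < 0."""
        q = rng.normal(loc=center, scale=1.0, size=n_projects)  # project qualities
        v = np.where(q >= 0, q ** n_good, -((-q) ** n_bad))     # project values
        return v.sum()

    for center in (0.6, 0.7):
        totals = [world_value(center) for _ in range(10_000)]
        print(f"center = +{center:.1f} SD: mean EV per world = {np.mean(totals):,.1f}")

With tail exponents this heavy, a handful of extreme draws dominates the average, which is exactly the "value lost to a few cataclysmic worlds" effect in the graphs below.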

Here's what an X-risk "we should slow down" perspective might look like. Each plotted point is a simulated "world." In this case, the simulation produces negative average EV across simulated worlds.

And here is what a Progress Studies "we should speed up" perspective might look like, with positive average EV.

The joke is that it's really hard to tell these two simulations apart. In fact, I generated the second graph by shifting the center of the project quality distribution 0.01 standard deviations to the right relative to the first graph. In both cases, a lot of the expected value is lost to a few worlds in which things go cataclysmically wrong.

One way to approach a double crux would be for adherents of the two sides to specify, in the spirit of "if it's worth doing, it's worth doing with made up statistics," their assumptions about the power law and project quality distribution, then argue about that. Realistically, though, I think both sides understand that we don't have any realistic way of saying what those numbers ought to be. Since the details matter on this question, it seems to me that it would be valuable to find common ground.

For example, I'm sure that PS advocates would agree that there are some targeted risk-reduction efforts that might be good investments, along with a larger class of progress-stimulating interventions. Likewise, I'm sure that XR advocates would agree that there are some targeted tech-stimulus projects that might be X-risk "security factors." Maybe the conversation doesn't need to be about whether "more progress" or "less progress" is desirable, but about the technical details of how we can manage risk while stimulating growth.

Comment by AllAmericanBreakfast on What's wrong with the EA-aligned research pipeline? · 2021-05-20T06:40:35.564Z · EA · GW

Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don't feel they have room to grow in terms of determining the expected value of the projects they're looking at. Very prepared to change my mind on this; I'm literally just going from the quotes in the context of the post to which they were responding.

Given that assumption (that grantmakers are already doing the best they can at determining EV of projects), then I think my three categories do carve nature at the joints. But if we abandon that assumption and assume that grantmakers could improve their evaluation process, and might discover that they've been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.

Comment by AllAmericanBreakfast on What's wrong with the EA-aligned research pipeline? · 2021-05-19T21:48:14.066Z · EA · GW

Your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.

In the context of the EA forum, I don't think it's necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let's say in a given year):

  1. Grantmakers run out of money and aren't able to fund all high-quality EA projects.
  2. Grantmakers have extra money, and don't have enough high-quality EA projects to spend it on.
  3. Grantmakers have exactly enough money to fund all high-quality EA projects.

None of these situations indicate that something is wrong with the definition of "high quality EA project" that grantmakers are using. In situation (1), they are blessed with an abundance of opportunities, and the bottleneck to do even more good is funding. In situation (2), they are blessed with an abundance of cash, and the bottleneck to do even more good is the supply of high-quality projects. In situation (3), they have two bottlenecks, and would need both additional cash and additional projects in order to do more good.

No matter how many problems exist in the world (suffering, death, X-risk), some bottleneck or another will always exist. So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake. It merely indicates that this is the present bottleneck they're facing.

For the rest, I'd say that there's a difference between "willingness to work" and "likelihood of success." We're interested in the reasons for EA project supply inelasticity. Why aren't grantmakers finding high-expected-value projects when they have money to spend?

One possibility is that the people who could supply projects, and the teams to work on them, aren't motivated to do so by the monetary and non-monetary rewards on the table. Perhaps if this were addressed, we'd see an increase in supply.

An alternative possibility is that high-quality ideas/teams are rare right now, and can't be had at any price grantmakers are willing or able to pay.

Comment by AllAmericanBreakfast on What's wrong with the EA-aligned research pipeline? · 2021-05-19T16:56:24.853Z · EA · GW

In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved.

My position is that "demand" is a word for "what people will pay you for." EA exists for a couple reasons:

  1. Some object-level problems are global externalities, and even governments face a free rider problem. Others are temporal externalities, and the present time is "free riding" on the future. Still others are problems of oppression, where morally-relevant beings are exploited in a way that exposes them to suffering.

    Free-rider problems by their nature do not generate enough demand for people to do high-quality work to solve them, relative to the expected utility of the work. This is the problem EA tackled in earlier times, when funding was the bottleneck.
  2. Even when there is demand for high-quality work on these issues, supply is inelastic. Offering to pay a lot more money doesn't generate much additional supply. This is the problem we're exploring here.

The underlying root cause is lack of self-interested demand for work on these problems, which we are trying to subsidize to correct for the shortcoming.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-19T07:39:09.832Z · EA · GW

I can see how you might interpret it that way. I'm rhetorically comfortable with the phrasing here in the informal context of this blog post. There's a "You can..." implied in the positive statements here (i.e. "You can take 15 years and become a domain expert"). Sticking that into each sentence would add flab.

There is a real question about whether or not the average person (and especially the average non-native English speaker) would understand this. I'm open to argument that one should always be precisely literal in their statements online, to prioritize avoiding confusion over smoothing the prosody.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-19T04:26:47.286Z · EA · GW

Thanks for that context, John. Given that value prop, companies might use a TB-like service under two constraints:

  1. They are bottlenecked by having too few applicants. In this case, they have excess interviewing capacity, or more jobs than applicants. They hope that by investigating more applicants through TB, they can find someone outstanding.
  2. Their internal headhunting process has an inferior quality distribution relative to the candidates they get through TB. In this case, they believe that TB can provide them with a better class of applicants than their own job search mechanisms can identify. In effect, they are outsourcing their headhunting for a particular job category.

Given that EA orgs seem primarily to lack specific forms of domain expertise, as well as well-defined project ideas/teams, what would an EA Triplebyte have to achieve?

They'd need to be able to interface with EA orgs and identify the specific forms of domain expertise that are required. Then they'd need to be able to go out and recruit those experts, who might never have heard of EA, and get them interested in the job. They'd be an interface to the expertise these orgs require. Push a button, get an expert.

That seems plausible. Triplebyte evokes the image of a huge recruiting service meant to fill cubicles with basically-competent programmers who are pre-screened for the in-house technical interview. Not to find unusually specific skills for particular kinds of specialist jobs, which it seems is what EA requires at this time.

That sort of headhunting job could be done by just one person. Their job would be to do a whole lot of cold-calling, getting meetings with important people, doing the legwork that EA orgs don't have time for. Need five minutes of a Senator's time? Looking to pull together a conference of immunologists to discuss biosafety issues from an EA perspective? That's the sort of thing this sort of org would strive to make more convenient for EA orgs.

As they gained experience, they would also be able to help EA orgs anticipate what sort of projects the domain experts they'd depend upon would be likely to spring for. I imagine that some EA orgs must periodically come up with, say, ideas that would require some significant scientific input. Some of those ideas might be more attractive to the scientists than others. If an org like this existed, it might be able to tell those EA orgs which ones the scientists are likely to spring for.

That does seem like the kind of job that could productively exist at the intersection of EA orgs. They'd need to understand EA concepts and the relationships between institutions well enough to speak "on behalf of the movement," while gaining a similar understanding of domains like the scientific, political, business, philanthropic, or military establishment of particular countries.

An EA diplomat.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-19T04:06:05.636Z · EA · GW

Great thoughts, ishaan. Thanks for your contributions here. Some of these thoughts connect with MichaelA's comments above. In general, they touch on the question of whether or not there are things we can productively discover or say about the needs of EA orgs and the capabilities of applicants that would reduce the size of the "zone of uncertainty."

This is why I tried to convey some of the recent statements by people working at major EA orgs on what they perceive as major bottlenecks in the project pipeline and hiring process.

One key challenge is triangulation.  How do we get the right information to the right person? 80000 Hours has solved a piece of this admirably, by making themselves into a go-to resource on thinking through career selection from an EA point of view.

This is a comment section on a modestly popular blog post, which will vanish from view in a few days. What would it take to compile the information that people like you, MichaelA, and many others have into a continually maintained resource, and get it into the hands of the people who need it? Does that knowledge have a shelf life long enough to be worth compiling, is it general enough to be worth broadcasting, and is it EA-specific enough not to be available elsewhere?

I'm primarily interested here in making statements that are durably true. In this case, I believe that EA grantmakers will always need to have a bar, and that as long as we have a compelling message, there will consequently always be some people failing to clear it who are stuck in the "zone of uncertainty."

With this post, I'm not trying to tell them what they should do. Instead, I am trying to articulate a framework for understanding this situation, so that the inchoate frustration that might otherwise result can be (hopefully) transmuted into understanding. I'm very concerned about the people who might feel like "bycatch" of the movement, caught in a net, dragged along, distressed, and not sure what to do.

That kind of situation can produce anger at the powers that be, which is a valid emotion. However, when the "powers that be" are leaders in a small movement that the angry person actually believes in, it could be more productive to at least come to a systemic understanding of the situation that gives context to that emotion. Being in a line that doesn't seem to be moving very fast is frustrating, but it's a very different experience if you feel like the speed at which it's moving is understandable given the circumstances.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-19T03:44:39.278Z · EA · GW

Good thoughts. I think this problem decomposes into three factors:

  1. Should there be a bar, or should all EA projects get funded in order of priority until the money runs out?
  2. If there's a bar, where should it be set, and why?
  3. After the bar is set, when should grantmakers re-examine its underlying reasoning to see if it still makes sense under present circumstances?

My post actively argues that we should have a bar, is agnostic on how high the bar should be, and assumes that the bar is immobile for the purposes of the reader.

At some point, I may give consideration to where and how we set the bar. I think that's an interesting question both for grant makers and people launching projects. A healthy movement would strive for some clarity and consensus. If neophytes could more rapidly gain skill in self-evaluation relative to the standards of the "EA grantmaker's bar," without killing the buzz, it could help them make more confident choices about "looping out and back" or persevering within the movement.

For the purposes of this comment section, though, I'm not ready to develop my stance on it. Hope you'll consider expanding your thoughts in a larger post!

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-15T23:33:56.152Z · EA · GW

I agree. I should have added "or a safe career/fallback option" to that.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-15T21:58:19.696Z · EA · GW

My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.

It just seems to me that Triplebyte is powered by a mature industry that's had decades of time and massive amounts of money invested into articulating its own needs and interests. Whereas I don't think EA is old or big or wealthy enough to have a sharp sense of exactly what the stable needs are.

For a sense of scale, there are almost 4 million programmers in the USA. Triplebyte launched just 5 years ago. It took millions of people working as programmers to generate adequate demand and capacity for that service to be successful.

All in all, my guess is that what we're missing is charismatic founder-types. The kind of people who can take one of the problems on our long lists of cause areas, turn it into a real plan, pull together funding and a team (of underutilized people), and make it go.

Figuring out how to teach that skill, or replace it with some other founding mechanism, would of course be great. It's necessary. Otherwise, we're kind of just cannibalizing one highly-capable project to create another. Which is pretty much what we do when we try to attract strong outside talent and "convert" them to EA.

Part of the reason I haven't spent more time trying to found something right off the bat is that I thought EA could benefit more if I developed a skillset in technology. But another reason is that I just don't have the slack. I think to found something, you need significant savings and a clear sense of what to do if it fails, such that you can afford to take years of your life, potentially, without a real income.

Most neophytes don't have that kind of slack. That's why I especially lean on the side of "if it hurts, don't do it."

I don't have any negativity toward the encouragement to try things and be audacious. At the same time, there's a massive amount of hype and exploitative stuff in the entrepreneurship world. There's this "Think of the guy who wrote WinZip! He made millions of dollars, and you can do it too!" line that business gurus use to suck people into their self-help sites and YouTube channels and so on.

The EA movement had some low-hanging fruit to pick early on. It's obviously a huge win for us to have great resources like 80k, or significant organizations like OpenPhil. Some of these were founded by world-class experts (Peter Singer) and billionaires, but some (80k) were founded by audacious young people not too far out of grad school. But those needs, it seems to me, are filled. The world's pretty rich. It's easier to address a funding shortfall or an information shortfall than to get concrete, useful direct work done.

Likewise in the business world, it's easier to find money for a project and outline the general principles of how to run a good business, than to actually develop and successfully market a valuable new product. There's plenty of money out there, and not a ton of obvious choices to spend it on. Silicon Valley's looking for unicorns. We're looking for unicorns too. There aren't many unicorns.

I think that the "EA establishment's" responsibility to neophytes is to tell them frankly that there's a very high bar, it's there for a reason, and for your own sake, don't hurt yourself over and over by failing to clear it. Go make yourself big and strong somewhere else, then come back here and show us what you can do. Tell people it's hard, and invite them back when they're ready for that kind of challenge.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-15T19:28:48.471Z · EA · GW

Triplebyte's value proposition to its clients (the companies who pay for its services) is an improved technical interview process. They claim to offer tests that achieve three forms of value:

  1. Less biased
  2. More predictive of success-linked technical prowess
  3. Convenient (since companies don't have to run the technical interviews themselves)

If there's room for an "EA Triplebyte," that would suggest that EA orgs have at least one of those three problems.

So it seems like your first step would be to look in-depth at the ways EA orgs assess technical research skills.

Are they looking at the same sorts of skills? Are their tests any good? Are the tests time-consuming and burdensome for EA orgs? Alternatively, do many EA orgs pass up on needed hires because they don't have the short-term capacity to evaluate them?

Then you'd need to consider what alternative tests would better measure technical research prowess, and how to show that they are more predictive of success than present technical interviews.

It would also be important to determine the scale of the problem. Eyeballing this list, there are maybe 75 EA-related organizations. How many hires do they make per month? How often does their search fail for lack of qualified candidates? How many hours do they spend on technical interviews each time? Will you be testing not just for EA-specific but for general research capacity (massively broadening your market, but also increasing the challenge of addressing all their needs)?

Finally, you'd need to roll that up into a convenient, trustworthy, and reliable package that clients are excited to use instead of their current approach.

This seems like a massive amount of work, demanding a strong team, adequate funding and prior interest by EA orgs, and long-term commitment. It also sounds like it might be really valuable if done well.

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-15T18:34:27.866Z · EA · GW

Figuring out how to give the right advice to the right person is a hard challenge. That's why I framed skilling up outside EA as being a good alternative to "banging your head against the wall indefinitely." I think the link I added to the bottom of this post addresses the "many paths" component.

The main goal of my post, though, is to talk about why there's a bar (hurdle rate) in the first place. And, if readers are persuaded of its necessity, to suggest what to do if you've become convinced that you can't surpass it at this stage in your journey.

It would be helpful to find a test to distinguish EAs who should keep trying from those who should exit, skill up, and return later. Probably one-on-one mentorship, coupled with data on what sorts of things EA orgs look for in an applicant, and the distribution of applicant quality, would be the way to devise such a test.

A team capable of executing a high-quality project to create such a test would (if I were an EA fund) definitely be worthy of a grant!

Comment by AllAmericanBreakfast on EA is a Career Endpoint · 2021-05-15T17:23:37.477Z · EA · GW

Hi Michael, thanks for your responses! I'm mainly addressing the metaphorical runner on the right in the photograph at the start of the post.

I am also agnostic about where the bar should be. But having a bar means that you have to maintain the bar in place. You don't move the bar just because you couldn't find a place to spend all your money.

For me, EA has been an activating and liberating force. It gives me a sense of direction, motivation to continue, and practical advice. I've run EA research and community development projects with Vaidehi Agarwalla, and published my own writing here and on LessWrong. These outlets, plus my pursuit of a scientific research career, have been satisfying outlets for my altruistic drive.

Not everything has been successful - but I learned a lot along the way, and feel optimistic about the future.

Yet I see other people who seem very concerned and often disappointed at the difficulty they have in their own relationship with EA. Particularly, getting EA jobs and grants, or dealing with the feeling of "I want to save the world, but I don't know how!" I'm extremely optimistic that EA is making, and will continue to make, an outsize positive impact on the world. What I'm more afraid of is that we'll generate what I call "bycatch."

Comment by AllAmericanBreakfast on What's wrong with the EA-aligned research pipeline? · 2021-05-14T20:51:08.052Z · EA · GW

Just to address point (2), the comments in "EA is vetting-constrained" suggest that EA is not that vetting-constrained:

  • Denise Melchin of Meta Fund: "My current impression for the Meta space is that we are not vetting constrained, but more mentoring/pro-active outreach constrained.... Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12)."
  • Claire Zabel of Open Philanthropy: "Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct... Often I feel like it's an inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about"
  • Jan Kulveit of FHI: "as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts... Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work."

One story, then, is that EA has successfully eliminated a previous funding bottleneck for high-quality world-saving projects. Now we have a different bottleneck - the supply of high-quality world-saving projects (and people clearly capable of carrying them out).

In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you'll always have either too much supply, too much demand, or a perception of complacency (where we've matched them up just right, but are disappointed that we haven't scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.

So how do we increase the supply of high-quality world-saving projects? Well, start by factoring projects into components:

  • A sharp, well-evaluated, timely idea with world-saving potential that also provides the team with enough social reward they're willing to take it on
  • A proven, generally competent, reliable team of experts who are available to work, committed to that idea, yet able to pivot
  • Adequate funding both for paying the team and funding their work
  • Access to outside consulting expertise
  • In many cases, significant political capital

Viewed from this perspective, it's not surprising at all that increasing the supply of such projects is vastly more difficult than increasing funding. On the other hand, this gives us many opportunities to address this challenge.

Perhaps instead of adding more projects to the list, we need to sharpen up ideas for working on them. Amateur EAs need to spend less time dreaming up novel causes/projects and more time assembling teams and making concrete plans - including for their personal finances. EAs need to spend more time building up networks of experts and government workers outside the EA movement.

I imagine that amateur EAs trying to skill up might need to make some serious sacrifices in order to gain traction. For example, they might focus on building a team to execute a project, but by necessity make the project small, temporary, and cheap. They might need to do a lot of networking and take classes, just to build up general skills and contacts, without having a particular project or idea to work on. They might need to really spend time thinking through the details of plans, without actually intending to execute them.

If I had to guess, here are some things that might benefit newer EAs who are trying to skill up:

  • Go get an MS in a hard science to gain some skill executing concrete novel projects and working in a rigorous intellectual discipline.
  • Write a book and get it published, even if it's not on anything related to EA.
  • Get an administrative volunteer position.
  • Manage a local non-EA altruistic project to improve their city.
  • Volunteer on some political campaigns.

Comment by AllAmericanBreakfast on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-14T03:42:49.571Z · EA · GW

Here's a list of critiques of the ITN framework, many of which involve critiques of the neglectedness criterion.

Ending the war on drugs has a few obvious goods:

  1. Making therapeutic or life-improving drugs more available
  2. Freeing up tax money for other purposes
  3. Decreasing punishment
  4. Decreasing revenue for terrorists and other bad actors

This seems to be a cause where partial success is meaningful. Every reduction in unnecessary imprisonment, tax dollar saved, and terrorist cell put out of business is a win. We also have some roughly sliding scales - the level of enforcement priority, gradations of legality (research vs medical vs recreational, decriminalization vs legalization), and treatment of offenders (informal social norms vs warnings vs treatment/fines vs jail).

So this suggests to me that neglectedness is relevant in this case. How relevant seems like a detailed question. But given that there's a fair amount of short-term self-interested incentives to legalize drugs, it doesn't seem obvious a priori that this would be a target for EAs relative to, say, animal suffering.

Comment by AllAmericanBreakfast on Concerns with ACE's Recent Behavior · 2021-04-17T16:53:56.901Z · EA · GW

Those are the circles many of us exist in. So a more precise rephrasing might be “we want to stay in touch with the political culture of our peers beyond EA.”

This could be important for epistemic reasons. Antagonistic relationships make it hard to gather information when things are wrong internally.

Of course, PR-based deference is also a form of antagonistic relationship. What would a healthy yet independent relationship between EA and the social justice movement look like?

Comment by AllAmericanBreakfast on How to PhD · 2021-03-30T17:23:51.914Z · EA · GW

That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then tailoring your PhD to optimize for them.

One challenge with the "work backwards" approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. As I attempted this kind of planning for my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they'll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a way to refine those ideas and gain the experience/network/credentials to stay in the game.

The "work backwards" approach is equally applicable to resource-gathering as finding concrete solutions to specific world problems.

I think it's important for career builders to develop gears-level models of how a PhD or tenured academic career gives them resources + freedom to work on the world problems they care about; and also how it compares to other options.

Often, people really don't seem to do that. They go by association: scientists solve important problems, and most of them seem to have PhDs and academic careers, so I guess I should do that too.

But it may be very difficult to put the resources you get from these positions to use in order to solve important problems, without a gears-level model of how those scientists use those resources to do so.

Comment by AllAmericanBreakfast on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-29T01:14:28.692Z · EA · GW

This is great, I’ll put a note in the main post highlighting this when I get home.

Comment by AllAmericanBreakfast on How to PhD · 2021-03-28T23:26:26.687Z · EA · GW

Just to clarify, it sounds like you are:

  1. Encouraging PhD students to be more strategic about how they pursue it
  2. Discouraging longtermist EA PhD-holders from going on to pursue a faculty position in a university, thus implying that they should pursue some other sector (perhaps industry, government, or nonprofits)

I also wanted to encourage you to add more specific observations and personal experiences that motivate this advice. What type of grad program are you in now (PhD or master's), and how long have you been in it? Were you as strategic in your approach to your current program as you're recommending to others? What are some specific actions you took that you think others neglect? Why do you think that other sectors outside academia offer a superior incentive structure for longtermist EAs?

Comment by AllAmericanBreakfast on Report on Semi-informative Priors for AI timelines (Open Philanthropy) · 2021-03-27T21:53:36.833Z · EA · GW

This prior should also work for other technologies sharing these reference classes. Examples might include a tech suite amounting to 'longevity escape velocity', mind reading, fully-immersive VR, or highly accurate 10+ year forecasting.

Comment by AllAmericanBreakfast on Can you turn me into an effective altruist and do you want to? · 2021-03-26T22:30:40.725Z · EA · GW

Hi Rob. I can only speak for myself. A lot of people, myself included, discover EA online, because the name or the ideas feel right.

Then we discover there’s a lot of people involved, huge amounts written, and many efforts going on. How do we meet people? How can we contribute? How can we find our place? How do we make sense of all the ideas?

I can only say that nobody is a nobody, and everybody struggles with these questions. It takes time to work it all out, so I advise patience. Write your thoughts out, and make sure to take care of yourself. It sounds like you are in the middle of building up a stable life for yourself, and I believe it’s extremely important for people in EA to focus on that first. Good luck!

Comment by AllAmericanBreakfast on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T16:52:52.210Z · EA · GW

Hi Jonas. On taking a second look, the sentence that clinched my interpretation of your argument as a call to rename EA to GP (or something else) was:

“ I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community")”

I will make a note that you aren’t advocating a name change. You may want to consider making this clearer in your post as well :)

Comment by AllAmericanBreakfast on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T04:13:42.925Z · EA · GW

I think it can be all of this, and much more. EA can have tremendous capacity for issuing broad recommendations and tailored advice to individual people. It can be about philosophy, governance, technology, and lifestyle.

How could we have a movement for effective altruism if we couldn’t encompass all that?

This is a community, not a think tank, and a movement rather than an institution. It goes beyond any one thing. So to join it or explain it - that’s a little like explaining what America is all about, or Catholicism is all about, or science is all about. You don’t just explain it, you live it, and the journey will look different to different people. That’s a feature, not a bug.

Comment by AllAmericanBreakfast on Strong Evidence is Common · 2021-03-14T06:19:38.267Z · EA · GW

I didn’t say anything about what size/duration of returns would make you a top 1% trader.

Comment by AllAmericanBreakfast on Don't Be Bycatch · 2021-03-12T15:50:39.838Z · EA · GW

That’s good feedback and a complementary point of view! I wanted to check on this part:

“I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization.”

Are you saying that you think EA is not particularly prone to generating bycatch? Or that it is, but it’s a problem that needs higher-level solutions?

Comment by AllAmericanBreakfast on For Better Commenting, Avoid PONDS · 2021-02-06T23:56:39.000Z · EA · GW

Did I get them all? :D

So close, yet so far! By ending your comment with a question and a smiley face, you missed "disengaged" and "prickly"! But keep trying, I know you've got this in you :P

Comment by AllAmericanBreakfast on A framework for discussing EA with people outside the community · 2021-02-03T07:33:06.622Z · EA · GW

I think for me, it might be best to use a straightforward “join us!” pitch.

Most people I know have considered the idea that there are better and worse ways to help the world. But they don’t extend that thinking to realize the implication that there might be a set of best ways. Nor do they have the long-tail of value concept. They also don’t have any emotional impulse pushing them to explore “what’s the best way to help the world?” Nor do they have any links to the community besides me.

My experience is that most of my friends and family have very limited bandwidth for considering or acting on altruistic ideas. If they do, they have even less bandwidth for thinking critically about effectiveness with an open mind.

So I’m thinking it might be good to try a conversation that goes something like this:

“I’m in the effective altruism movement!”

“What’s that?”

“We research to figure out the most effective ways to make the world a better place. You should join, it would be awesome to have you!”

“Hm, that sounds cool. But how do you figure something like that out?”

“Oh it’s super interesting. Takes quite a bit of thought of course, but it’s also fun. I can show you if you want?”

“Sure....”

“Ok, so what’s a way you want to help the world, maybe by volunteering or donating or something?”

“Um, I donated to a food bank for Chanukah.”

“Great! So here’s how we’d think about that at EA. Basically we want to start by figuring out the principle behind why you picked a food bank. Why’d you donate there?”

“I heard the food banks were running low because of COVID, plus I like to cook.”

“Cool, that makes sense. So partly it fits with your interests, and partly it’s about making sure people have enough to eat?”

“Yeah, pretty much.”

“Gotcha. Ok. So in EA, we focus on the ‘help other people’ part especially, so let’s set aside the fact that you like to cook and focus on the getting food to people part, is that ok?”

“Yeah.”

“So this might seem like kind of a silly question, but why is it important for people to get enough to eat?”

“So they don’t starve, or go hungry.”

“Right. I mean those things are obviously bad, and we want to think about what exactly is bad about starving, or going hungry?”

“Well, you could die. Or just be really miserable. It makes kids not be able to think straight in school. Plus you might not be able to work and you could end up homeless.”

“Right. So misery, death, and just struggling to be able to keep your life together?”

“Yeah.”

“Ok. So this is where EA gets into the picture. So first off, EAs think that everybody’s lives matter equally, like a kid in Africa’s life matters just as much as a kid in America. Do you agree with that?”

“Definitely!”

“Right, I figured! And where do you think people are struggling more with food insecurity, here in our city or in a place like, say, Yemen?”

“Uh, definitely Yemen.”

“And where do you think the money you donated would go further toward buying food, here or in a place like Yemen?”

“Probably also Yemen? Except they have a war going on I think, so maybe it’s hard to get food there?”

“You’re already thinking like an EA! You can already kind of see where this leads, right? We’re trying to think of where to make your donation go farthest, plus make sure it actually accomplishes something. Like, maybe the food pantry in our city is low on food, but maybe there are places where people have nothing to eat at all.”

“Right, right... but the thing is, don't we have a responsibility to help people here? And plus, how would you, like, figure out where to donate to help people in Yemen? How do you know the charity actually works?”

“Well basically, I'd start by saying this is a really complicated subject, and I'd be happy to talk it out for as long as you're interested. It's one of my favorite topics. But this is why I think it's really important to join EA. We basically have a whole community of people and nonprofits who are super focused on all this stuff. We think through those thorny questions, like whether it's best to focus on helping people in your own community. We're also putting in, like, tens of thousands of hours of research on charities to see which ones really work, which basically nobody was doing before we started the movement. So the point is, if you're in EA, you don't have to figure it all out for yourself. Want to join?”

I know it seems silly to frame it as a club that you join, but also... why not?

Comment by AllAmericanBreakfast on Call for beta-testers for the EA Pen Pals Project! · 2020-07-13T20:59:39.075Z · EA · GW

Update: We were unsuccessful in seeking funding to automate this project, and for the time being we do not have capacity to maintain it manually. The project is closed.

Comment by AllAmericanBreakfast on EA lessons from my father · 2020-05-11T18:28:18.415Z · EA · GW

These issues are extremely complex, and you bring up a good point, one with underlying values that I agree with. Nevertheless, many of my research interests are in Alzheimer's, chronic severe pain, and life extension. I think that people in poor countries ultimately are going to improve their length and quality of life, and there's a strong trend in that direction already. I am long on malaria being eradicated within the next 30 years. We mostly know what to do; what's holding us back is a combination of environmental caution and the challenges of culturally sensitive governance.

I'm most concerned with the despair and suffering of the elderly and chronically ill, from a sheer "loss of utility" perspective. These problems are incredibly complex: we still have just one Alzheimer's drug, and it buys you maybe an extra year. We don't understand how pain works. Most of the utility of the investment in R&D lies at the end of the research process, so the non-neglected nature of these problems is irrelevant from the perspective of utility. Of course, it's quite relevant from the perspective of basic fairness. That's just less of a motivator for me.

Beyond that, I'm sort of an immortalist. I think that the best way to get people to broaden their moral horizons and think long-term is to help them live longer, happier, healthier lives. I honestly do think it's an emergency that even in the industrialized world, life expectancy only reaches the late 70s, and our declines come with lots of suffering. You spend your best years trying to save up to afford your worst years. Preaching about animals, the poor, and our descendants doesn't work on a scale big enough to change the world. The only way I see to change the situation is to dramatically improve the experience of old age and reduce chronic suffering. My intuition is that happy and relaxed people are more compassionate, and that it's fear, or the experience of pain and dementia, that undermines our happiness and contemplative ability.

Comment by AllAmericanBreakfast on EA lessons from my father · 2020-05-11T03:56:18.093Z · EA · GW

Thank you :)

Comment by AllAmericanBreakfast on Research on developing management and leadership expertise · 2020-03-06T07:21:06.937Z · EA · GW

Do the book and other resource recommendations especially apply to people interested in working on animal welfare?

Comment by AllAmericanBreakfast on Biases in our estimates of Scale, Neglectedness and Solvability? · 2020-03-06T07:14:12.374Z · EA · GW

Here is that review I mentioned. I'll try to add this post to that summary when I get a chance, though I can't do justice to all the mathematical details.

If you do give it a glance, I'd be curious to hear your thoughts on the critiques regarding the shape and size of the marginal returns graph. It's these concerns that I found most compelling as fundamental critiques of using ITN as more than a rough first-pass heuristic.

Comment by AllAmericanBreakfast on Biases in our estimates of Scale, Neglectedness and Solvability? · 2020-03-06T04:05:01.810Z · EA · GW

The end of this post will be beyond my math until next year, so I'm glad you wrote it :) Have you given thought to the pre-existing critiques of the ITN framework? I'll link to my review of them later.

In general, ITN should be used as a rough, non-mathematical heuristic. I’m not sure the theory of cause prioritization is developed enough to permit so much mathematical refinement.

In fact, I fear that it gives a sheen of precision to what is truly a rough-hewn communication device. Can you give an example of how an EA organization presently using ITN could improve their analysis by implementing some of the changes and considerations you’re pointing out?
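For concreteness, here is the decomposition I have in mind when I talk about "mathematical refinement" (my own sketch of the standard version, as I understand it from 80,000 Hours' writeups; your post's notation may differ). The three factors are defined so that the intermediate units cancel:

$$
\underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{cost-effectiveness}}
= \underbrace{\frac{\text{good done}}{\%\,\text{of problem solved}}}_{\text{Scale}}
\times \underbrace{\frac{\%\,\text{of problem solved}}{\%\,\text{increase in resources}}}_{\text{Solvability}}
\times \underbrace{\frac{\%\,\text{increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
$$

Multiplying the factors out just recovers "good done per extra dollar," which is part of why I read the factorization as a communication device rather than as new information.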

Comment by AllAmericanBreakfast on [WIP] Summary Review of ITN Critiques · 2019-10-09T20:47:58.233Z · EA · GW

I also hoped to imply that ITN is more than a heuristic. It also serves a rhetorical purpose.

I worry that its seeming simplicity can belie the complexity of cause prioritization. Calculating an ITN rank or score can be treated as the end, rather than the beginning, of such an effort. The numbers can tug the mind in the direction of arguing with the scores, rather than evaluating the argument used to generate them.

My hope is to encourage people to treat ITN scores just as you say - taking them lightly and setting them aside once they've developed a deeper understanding of an issue.

Thanks for reading.

Comment by AllAmericanBreakfast on [WIP] Summary Review of ITN Critiques · 2019-10-09T18:00:37.663Z · EA · GW

Agreed. However, one of the subcritiques in that point is the divide-by-zero issue that makes causes which have received zero investment "theoretically unsolvable": any percentage increase in resources from a starting point of 0 still yields zero additional resources (a quick sketch below). The critic seems to feel it's a result of dividing up the issue in this way.
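To make the arithmetic explicit (a minimal sketch of my own, assuming the usual percentage definitions; the critic may formalize it differently): a p% increase on current resources R adds

$$
\Delta R = R \cdot \frac{p}{100},
$$

which is 0 for every p when R = 0. Likewise, the neglectedness factor, the percentage increase in resources bought by one extra dollar, works out to $100/R$, which is undefined at $R = 0$. So the framework literally cannot score a problem that no one has yet funded.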

I leave it to the forum to judge!

Comment by AllAmericanBreakfast on [deleted post] 2019-10-07T01:47:11.965Z

Can you give a few examples? Having options and avoiding risk are both good things, all else being equal.

Comment by AllAmericanBreakfast on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-06T07:41:03.700Z · EA · GW

There's a range of posts critiquing ITN from different angles, including many of the ones you specify. I was working on a literature review of these critiques, but stopped in the middle. It seemed to me that organizations that use ITN do so in part because it's an easy-to-read communication framework. It boils down an intuitive synthesis of a lot of personal research into something that feels like a metric.

When GiveWell analyzes a charity, they have a carefully specified framework they use to derive a precise cost-effectiveness estimate. By contrast, I don't believe that 80k or OpenPhil have anything comparable for the ITN rankings they assign. Instead, I believe that their scores reflect a deeply researched and well-considered, but essentially intuitive, personal opinion.

Comment by AllAmericanBreakfast on [deleted post] 2019-10-06T00:30:11.796Z

I want to give more context for the MacAskill quote.

“The most obvious implication [of the Hinge of History hypothesis], however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.”

Here, he is talking about strategies for solving specific problems, X-risks in this case. This is not relevant to the cluelessness argument advanced by Mogensen, which is what I am addressing. Later in his article, though, he does touch on the topic.

“Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment.”

Buck-passing, or punting, is compatible with the "debugging" concept, but not with Mogensen's "cluelessness." With debugging, you deliberate as long as is possible or productive, and then act as wisely as possible. Once you've made a decision, you fix side-effect problems as they arise, which might include finding ways to reverse the decision where possible. Although some decisions will result in genuinely enormous moral disasters, such as slavery or Nazism, this approach appears to me to be both net good and our only choice.

With Mogensen's cluelessness argument, it doesn't matter how long you deliberate, because you have to be able to predict the ripple effects and their moral weights into the far future first. Since that's impossible, you can never know the moral value of an action. We therefore can't morally prefer one action over another. I'm not strawmanning this argument. It really is that extreme.

Buck-passing/punting is also not identical to "debugging." In buck-passing or punting, we're deferring a decision on a specific issue to a wiser future. A current ban on genetically engineered human embryos is an example. In debugging, we're making a decision and trusting the future to resolve the unexpected difficulties. Climate change is an example: our ancestors created fossil fuel-based industry, and we are dealing with the unexpected consequences.

The reason I don't feel the need to engage with the cluelessness literature is that, when sensible, it's simply providing another approach to describing basic problems from economic theory and common sense, which I understand reasonably well and expect I can learn better from those sources. When done badly, it's a salad of sophistry with a thick and unnecessary dressing of formal logic. I can't read everything, and I think I'll learn a lot more of value from studying, oh, almost anything else. These writers need to convince me that they've produced insights of value if they want me to engage. I'm just describing why they haven't succeeded in that project so far.

By the way, I appreciate you responding to my post. Although I'm sure you can see I've got little patience for Mogensen and the cluelessness literature I've seen more generally, I think it's important to have conversations about it. And it's always nice to have someone take an interest.

Comment by AllAmericanBreakfast on [deleted post] 2019-10-05T21:07:03.589Z

Her first example of "complex cluelessness" is the same population-size argument made by Mogensen, which I dealt with in section 2a. I think both simple and complex cluelessness are dealt with nicely by the debugging model I am proposing. But I'm not sure it's a valid distinction. I suspect all cluelessness is complex.

Debugging is a form of capacity building, but the distinction I drew is necessary. Sometimes we try to build advance capacity to solve an as-yet-intractable problem, as in AI safety research. This is vulnerable to the cluelessness argument. Even if we are successful in those efforts and manage to solve the problem, we still cannot predict all the precise long-term consequences. Too much moral dark matter remains. This form of capacity-building cannot stand up to Mogensen and Greaves's critique, because it doesn't address the problem they raise.

This debugging model does. Beyond our ability to build capacity to solve specific, known intractable problems, we already have, and likely always will have, the capacity to solve problems in general. Unknown unknowns become known, and then we solve them. We keep the good, fix the bad, and develop more wisdom to deal with the ugly.

I'm not planning on engaging further with the cluelessness literature because what I've seen makes me think GPI is off track. It strikes me as a combination of sophistry and obscurantism that I find hard to take seriously. This writing was an attempt to get my own thoughts in order. I invite others who find their ideas more compelling to explain why "debugging," in conjunction with a frank acknowledgement that the future is risky, can't account for cluelessness.