Towards effective entrepreneurship: what makes a startup high-impact?

post by Michael_PJ · 2017-11-26T17:01:59.212Z · score: 5 (5 votes) · EA · GW · Legacy · 7 comments

Contents

  Introduction
  What makes a startup high impact?
    Impact model
      Customers
      Third parties
      Employees
      Impact mechanism
    Assessing the impact model
      Scale
      Tractability
      Counterfactuals/Neglectedness

Introduction

This post owes a great deal to prior work and thought by Spencer Greenberg, Eric Gastfriend, and Peter Hartree.

This post is a summary of the object-level thought on what makes a startup high impact which we developed while working on the Good Technology Project.

A lot of this material is more-or-less obvious applications of EA thought to startup theory. Nonetheless, it managed to be surprising and useful to people, so perhaps it is less obvious than it seems. I’ve condensed the presentation given the intended audience of this post - there is a lot more to say on many of these points. This material might have eventually developed into a “guide” to effective entrepreneurship.

In addition, some of the material relates to how to manage a startup in later stages. We never really got a chance to try that out, so it is especially speculative.

What makes a startup high impact?

We’re interested in startups because we think that they might be a mechanism by which we can have a large positive impact on the world. But what are the qualities that we should look for in a startup?

Impact model

Before we get started on assessing how good a company is, we should try to get clear on how that company benefits the world - that is, to make an impact model for the company.

The first big consideration is who the company helps. Usually there will be one group in particular that you are expecting to benefit. A good way of figuring out who these are is to consider the various groups of “affectees” for your company.

Customers

Customers are the most obvious people who benefit (or suffer) from the existence of a company. They pay a cost in money and time, and they gain your product in return.

For example, Mesh Power’s1 primary beneficiary group is its customers (insofar as you think that promoting clean energy over burning kerosene might have environmental benefits, Mesh Power may also have some externality benefits, see below).

While we can usually assume that people will buy things that actually improve their lives, this isn't universally true. Cigarettes, addictive drugs, and addictive games are examples.[2]

Third parties

The operation of your company will also affect people who are not part of the transaction, or even involved at all. These effects are called externalities. Often these are positive, in the case of innovation and economic growth, but they can also be negative, such as pollution, developing dangerous new technologies, or causing technological unemployment.

For example, despite being a car company, Tesla's primary beneficiary group is arguably third parties, because accelerating the progress of electric cars and energy storage will help to ameliorate climate change.

An important class of externality is benefits produced by your customers, which will often happen if you're selling to businesses or institutions. For example, disease outbreak monitoring systems may be sold to governments, but the beneficiaries are the people who don't get ill because of the government's improved preventative action.

Employees

A third category of beneficiaries is your employees. They will gain pay and satisfaction from working for you, but will also spend their time. In bad cases they could experience physical or psychological harm because of the job.

For example, one of M-PESA’s beneficiary groups are among its employees, since it needs lots of places for customers to buy and sell mobile money, and this provides additional income for a lot of relatively poor shop owners.

Impact mechanism

The next thing to do is to work out how you think your startup will actually affect your target group of beneficiaries. This is likely to be very uncertain, especially if you expect to create an impact through externalities. Nonetheless, it's better to explicitly write down what you're uncertain about.

For example, here’s one mechanism by developing a better test for drug-resistant TB might improve wellbeing:

  • Decrease cost of TB test
  • Increase availability of test in low-resource areas
  • Accurately distinguish more cases of drug-resistant TB from normal TB
  • Give more drug-resistant TB sufferers the correct drugs
  • Cure more people of drug-resistant TB than otherwise
  • Fewer people go through the lengthy suffering of drug-resistant TB
  • Increase wellbeing

There may well be several such mechanisms, of course!

Once you have an explicit impact mechanism, it gives you two useful things: a set of hypotheses about how your impact occurs, which you can test; and a set of stages in the mechanism, which you can measure.

Most of these won’t be things you can test or measure now, but it’s worth thinking from time to time whether you might be able to measure more of them. For example, in early development you might focus on measuring the cost of the test, but as you roll out you might also be able to measure improvements in availability.
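As a rough sketch, the stages in a mechanism like the one above can be treated as a chain of conversion rates and multiplied through to get an order-of-magnitude estimate of impact. Every number below is an invented placeholder, not real TB data:

```python
# Toy model of the drug-resistant TB impact mechanism: each stage of the
# chain is a multiplier on the previous one. All estimates are
# illustrative guesses to show the structure, not real figures.
stages = [
    ("tests sold per year", 100_000),
    ("fraction reaching low-resource areas", 0.5),
    ("fraction correctly distinguishing drug-resistant TB", 0.9),
    ("fraction of identified patients who then get the right drugs", 0.7),
    ("fraction cured who otherwise wouldn't have been", 0.4),
]

expected_cures = 1.0
for description, estimate in stages:
    expected_cures *= estimate

print(f"rough expected extra cures per year: {expected_cures:,.0f}")
```

Even with made-up numbers, a model like this shows which stage dominates your uncertainty, and replacing a guess with a measurement (e.g. the availability figure as you roll out) tightens the whole estimate.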

Assessing the impact model

We can apply our usual INT heuristics in this case, although we can pick out some particular considerations for the domain. These can work both for picking out a broad problem area, and for directly picking out factors relevant to a particular impact model.

Scale

As ever, we care about both how many people we help and how much we help them.

We should think about maximum scale here: if you could eventually sell your product to everyone on Earth, that’s better than if you’re limited to just one national market. If we think about our possible beneficiary groups, third parties tend to be the biggest group, followed by your customers, and then your employees.

Similarly, a life-saving product is much better for each person than something that merely saves them some money.

Tractability

There are a couple of big things that affect tractability.

The first is obvious: the problem may be hard. Or the problem may be easy, but making it profitable may be hard. And we’re primarily thinking about businesses here, so if you can’t make it profitable, you can’t do it.

Secondly, you might not want to do it. Running a business is hard work, and you face pressure not only to drop out, but to cave in on issues where your investors or advisors may not be aligned with what you want. If your beneficiary group is your customers, then your profit goals and your impact goals are relatively aligned, so this may be easier.

In other cases this is less likely. For example, Uber may be benefiting its 1.5 million drivers. But it is not incentivised to employ these people, because doing so costs money, so as soon as it can automate them away, it will.

Finally, you might not be able to figure out what to do. Even if you can identify the problem, you may not be able to figure out a plausible mechanism to actually have an impact on it, or your mechanism might fail to work.

Tractability issues result in two big failure modes:

  • The business fails entirely
  • The business succeeds, but it has a low or negative impact

Counterfactuals/Neglectedness

Assuming that you start a business that solves a real problem, it's likely that someone else would have solved it eventually. That means that the effect you have is the difference between those two timelines, which will look like getting X extra years of the solution. We can call this your time advantage.

Generally, the bigger the time advantage the better. That said, if the problem is big enough, even a short time advantage can be enormously valuable - getting a malaria vaccine a year earlier would be huge!
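The arithmetic behind this is simple: counterfactual impact is roughly the value the solution produces per year times the years of time advantage. A minimal sketch, with all figures invented for illustration:

```python
# Counterfactual impact ≈ annual value × years of time advantage.
# Both inputs below are illustrative placeholders.

def counterfactual_impact(annual_value, years_advantage):
    """Extra value created by the solution arriving early."""
    return annual_value * years_advantage

# A huge problem with a tiny time advantage...
vaccine = counterfactual_impact(annual_value=400_000, years_advantage=1)

# ...can still beat a modest problem with a large time advantage.
niche = counterfactual_impact(annual_value=20_000, years_advantage=10)

print(vaccine, niche)
```

The point of writing it out is that both factors matter: a short time advantage is only a dealbreaker when the annual value is also modest.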

But generally bigger is better. There are a few ways you might have a big time advantage:

Firstly, the technology you use has existed for a while but hasn’t been applied to the problem that you are applying it to. That suggests that it would continue to be unsolved in that way for a long time if you don’t do it.

Counterintuitively, this suggests that you should stay away from new technologies: it is very likely that someone will try “machine learning for X” relatively soon, so it is unlikely to be neglected.

Another common case is that the problem requires an unusual combination of skills, knowledge, or inclinations. For example, you might know about both financial services and the developing world, while also being altruistic. Combinations of traits are correspondingly rarer - if you have a couple of moderately rare skills, then it is likely that you have a very rare combination of skills. It may be a long time before this combination comes along again, and so any problems that require it may go unsolved until then.[3]

This suggests that you should look especially hard for problems that only you (or you and your friend with the other unusual skills) can solve, because that is likely to give you a big time advantage.

Finally, the incentives to solve the problem may be lacking (e.g. the customers are poor). This is a tough case, because those incentives will also be lacking for you, so you need a good story about how you are going to keep your impact on track. Many benefits to third parties have this form. Often, if the externality is innovation, a strong founder can ensure that most of the benefit is produced before they are phased out. For example, Tesla has chosen to give away its patents for free, which might not have happened with a less altruistic CEO.

  1. Sadly, it looks like they’ve gone bust since I last checked, but they’re still a good example of the principle. 

  2. Spencer Greenberg’s podcast discusses some of the ways startups can unexpectedly cause harm. 

  3. Peter Thiel talks about “secrets” which are unusual beliefs that you have which make you think that a problem is soluble, even though the general belief may be that it is not. These are another thing that can make you unusual. 

 

7 comments

Comments sorted by top scores.

comment by MichaelPlant · 2017-11-26T22:53:12.017Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks very much for this. I just want to add a twist to this:

Counterintuitively, this suggests that you should stay away from new technologies: it is very likely that someone will try “machine learning for X” relatively soon, so it is unlikely to be neglected.

EAs don't have to stay away from new tech. You could plan to have an impact by getting rich via being the first to build cutting-edge tech and then giving your money away - basically a variant of 'earning to give'. In this case your company wouldn't have done much good directly - because what you call the 'time advantage' would be so tiny - and the value would come from your donations. This presumes the owners of the company you beat wouldn't have given their money away.

comment by Michael_PJ · 2017-11-27T19:58:02.231Z · score: 1 (1 votes) · EA(p) · GW(p)

Yes - I should have clarified but this is deliberately not addressing the "earning to give through entrepreneurship" route. I should have mentioned it because it's quite important: I think for a lot of people it's going to be the best route.

Aside: if I think earning to give is so great, why have I been spending so much time talking about direct work? Because I think we need to do more exploration.

comment by Benjamin_Todd · 2017-11-27T01:14:08.617Z · score: 1 (1 votes) · EA(p) · GW(p)

Yes, there are other instrumental reasons to be involved in new tech. It's not only the money, but it also means you'll learn about the tech, which might help you spot new opportunities for impact, or new risks.

I also think I disagree with the reasoning. If you consider neglectedness over all time, then new tech is far more neglected since people have only just started using it. With tech that has been around for decades, people have already had a chance to find all its best applications. e.g. when we interviewed biomedical researchers, several mentioned that breakthroughs often come when people apply new tech to a research question.

My guess is that there are good reasons for EAs to aim to be on the cutting edge of technology.

comment by Michael_PJ · 2017-11-27T20:05:17.998Z · score: 1 (1 votes) · EA(p) · GW(p)

Let me illustrate my argument. Suppose there are two opportunities, A and B. Each of them contributes some value at each time step after it has been taken.

In the base timeline, A is never taken, and B is taken at time 2.

Now, it is time 1 and you have the option of taking A or B. Which should you pick?

In one sense, both are equally neglected, but in fact taking A is much better, because B will be taken very soon, whereas A will not.

The argument is that new technology is more likely to be like B, and any remaining opportunities in old technology are more likely to be like A (simply because if they were easy to do, we would have expected someone to do them already).

So even if most breakthroughs occur at the cutting edge, so long as we expect other people to do them soon, and they are not so big that we really want even a small speedup, then it can be better to find things that are more "persistently" neglected. (I used to use "persistent neglectedness" and "temporary neglectedness" for these concepts, but I thought it was confusing)
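The A/B timelines above can be sketched numerically. Assume each opportunity, once taken, produces one unit of value per time step up to some horizon; the horizon length is an arbitrary choice for illustration:

```python
# Toy model of "persistent" vs "temporary" neglectedness.
# An opportunity taken at time t produces 1 unit of value per step
# from t until the horizon; the horizon is arbitrary.

HORIZON = 10

def value(time_taken, horizon=HORIZON):
    """Total value produced if the opportunity is taken at time_taken."""
    if time_taken is None:  # never taken
        return 0
    return horizon - time_taken

# Base timeline: A is never taken, B is taken at time 2.
baseline = value(None) + value(2)

# If you take A at time 1, B still gets taken at time 2 by someone else.
take_a = value(1) + value(2)

# If you take B at time 1, A is still never taken - you only moved B
# forward by one step.
take_b = value(1) + value(None)

# Your counterfactual impact is the difference from the baseline.
impact_a = take_a - baseline
impact_b = take_b - baseline
print(f"impact of taking A: {impact_a}, impact of taking B: {impact_b}")
```

Both opportunities look equally untaken at time 1, but the counterfactual difference is large, because B's impact is capped at the one step you moved it forward.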

comment by Benjamin_Todd · 2017-11-28T04:31:31.665Z · score: 0 (0 votes) · EA(p) · GW(p)

OK, I agree that makes sense as well - it now seems unclear which way it goes.

However, if you're thinking from a career capital or more long-term future perspective (where transformative technologies are often the key lever), my guess is that EAs should still focus on learning about cutting-edge technologies.

comment by Michiel · 2017-11-27T18:20:34.704Z · score: 0 (0 votes) · EA(p) · GW(p)

One of the things I find hard is the externalities, because often there are tons of things that a company is influencing. For example, with Heroes & Friends (our company) we are trying to build a platform for social movements (NGOs, social enterprises, etc.) and we don't control who is using it. So it can be used for ineffective movements but also highly effective ones. However, we see a new society emerging where people take action themselves and take responsibility for improving their own community and helping other people too. So on the surface it might have less direct impact (depending on the users), but in the long term we want to be the marketplace of the 'informal economy' where people can 'harvest goodwill'. In order for this bottom-up economy to self-organize it needs a system or marketplace that provides the technology to do so, and we are basically building the best software for social movements to grow. But how would you include or exclude externalities? Which ones do you count and which ones do you leave out?

Is it a positive externality that more than 1 million people read good news stories and opportunities to act in their social media feeds because of our platform, or not? Is it a negative externality that many projects are not optimised for 'doing the most good'? I'm just wondering how we could measure this for our own company, but also for many others, because I think a lot of data points should be included.

comment by Michael_PJ · 2017-11-27T19:54:56.528Z · score: 1 (1 votes) · EA(p) · GW(p)

I think it's worth trying to have a toy model of this, even if it's mostly big boxes full of question marks. Going down to the gears level can be very helpful.

For example, it can help you answer questions like "how much good does doing X for one person have to do for this to be worth it?", or "how many people do we need to reach for this to be worth it?". You might also realise that all your expected impact comes from a certain class of thing, and then try and do more of that or measure it more carefully.
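One way to make the "how many people do we need to reach for this to be worth it?" question concrete is a break-even calculation, even with big boxes full of question marks as inputs. Every figure here is a hypothetical placeholder:

```python
# Toy break-even model: how many people must a platform reach per year
# for its expected benefit to exceed its running cost?
# All three inputs are made-up placeholders to be replaced with your
# own estimates.

annual_cost = 200_000        # cost of running the project per year
value_per_person = 5         # estimated benefit per person reached
prob_benefit_is_real = 0.25  # confidence that the mechanism works at all

expected_value_per_person = value_per_person * prob_benefit_is_real
break_even_reach = annual_cost / expected_value_per_person

print(f"break-even reach: {break_even_reach:,.0f} people per year")
```

Even when the inputs are question marks, the structure tells you which question mark matters most - here, halving your confidence in the mechanism doubles the reach you need.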

Which externalities to include is a tough question! In most examples I think there are a few that are "obviously" the most important, but that's just pumping my intuition and probably missing some things. I think often this is a case of building out your "informal model" of the project: presumably you think it will be good, but why? What is it about the project that could be good (or bad)? If you can answer those questions you have at least a starting point.

One final thing: when I say "negative externality" I mean something that's actively bad. It seems unlikely that people using your platform for ineffective projects is bad, but rather neutral (since we think they're not very effective). What might be bad could be e.g. reputational damage from being associated with such things.