
The Paradox of Plenty

If we improve the price performance of a product or service fast enough for long enough, we will see demand grow. But that doesn't last forever.
John Parkinson | December 16, 2014

The principles of aligned supply and demand in economics are pretty inflexible. If something is scarce, its free-market price will rise. If it's abundant, the price will fall. If the price rises too far, innovators will bring lower-cost alternatives to the market. If the price falls too far (and there is reduced or no profit available), consolidation occurs, or price controls (directly or via subsidies) are introduced.

But there are situations where applying these principles without fully understanding the context can lead to mistakes when analyzing trends.


Let’s look at a corollary. When a product or service becomes more productive (in this instance, assume we get more output for the same or lower cost) we should need less of it to get the same amount of work done. There will be less demand. Over time, the supply will drop to match the reduced demand, potentially increasing the unit price to compensate for reduced volume. Eventually a new equilibrium will be reached. Even if demand grows steadily, if we can improve productivity significantly faster, we should still see supply drop.

Sometimes, however, we see a different outcome.

If we can improve the price performance of a product or service fast enough for long enough, we will see demand grow rather than shrink. This is called "the Jevons paradox," and it occurs because we find new things to do with the better, cheaper product or service — things we can now afford but that were previously too expensive or difficult. The original example comes from the industrial revolution (Jevons was a nineteenth-century English economist). Radical improvements in the efficiency of the then newly invented steam engine should have reduced the demand for coal to fuel successive generations of ever more efficient engines. But the new engines were also cheaper to manufacture and operate, enabling many new uses and actually increasing the demand for coal to fuel them. That created a demand for more efficient mining, generating demand for more automation, requiring more steam engines, and so on.
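The mechanism behind the paradox can be sketched numerically. In the toy model below (the numbers and the constant-elasticity demand curve are illustrative assumptions, not figures from the article), an efficiency gain cuts the effective price of "useful work"; whether total fuel consumption falls or rises depends on how elastic demand is:

```python
def fuel_consumed(efficiency, elasticity, base_demand=100.0, base_price=1.0):
    """Fuel burned after engines become `efficiency` times more efficient.

    The effective price of useful work falls to base_price / efficiency;
    demand for useful work responds with a constant price elasticity.
    Fuel burned is demand for useful work divided by efficiency.
    """
    effective_price = base_price / efficiency
    demand = base_demand * (effective_price / base_price) ** (-elasticity)
    return demand / efficiency

# Inelastic demand (elasticity 0.5): doubling efficiency cuts fuel use.
print(round(fuel_consumed(2.0, 0.5), 1))  # 70.7 — below the baseline of 100

# Elastic demand (elasticity 1.5): doubling efficiency *raises* fuel use.
print(round(fuel_consumed(2.0, 1.5), 1))  # 141.4 — the Jevons paradox
```

The crossover sits at an elasticity of exactly 1: below it, efficiency gains reduce resource use; above it, the new uses unlocked by cheaper output swamp the savings.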

For the past 50 years, Moore's Law (in its several variants over time) has fueled a similar example of the paradox. As the cost per transistor dropped roughly in half every 18 months (now closer to 24), we should have been able to get by with fewer computer servers in enterprise data centers. The automation of core business processes hasn't changed all that much in the past 30 years, while server performance has improved probably 100-fold; even allowing for the relatively modest gains in software performance, the server market should have peaked around 20 years ago.
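The compounding here is easy to understate. A quick back-of-the-envelope calculation (using the 18- and 24-month halving periods mentioned above) shows just how steep the cost curve has been:

```python
def cost_reduction_factor(years, months_per_halving):
    """How many times cheaper a transistor gets over `years`,
    assuming its cost halves every `months_per_halving` months."""
    halvings = years * 12 / months_per_halving
    return 2 ** halvings

# Over 30 years at an 18-month halving period: 2^20 halvings,
# roughly a million-fold drop in cost per transistor.
print(cost_reduction_factor(30, 18))  # 1048576.0

# At the slower 24-month pace, 30 years still yields 2^15,
# about a 32,000-fold reduction.
print(cost_reduction_factor(30, 24))  # 32768.0
```

Against a roughly million-fold drop in transistor cost, a 100-fold improvement in server performance is almost a rounding error — which is why, absent new workloads, the server market "should" have shrunk long ago.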

It didn’t, because an explosion of new workloads, enabled by low-cost, ever more powerful servers, required vast amounts of new computing capacity. “Industry standard server” (ISS) volume grew rapidly even as larger proprietary server volumes peaked and declined. Even the recent improvements in utilization of ISS servers (mostly via “virtualization” — running many virtual machines on a single physical server) only slowed the rate of growth. Not until we started to consolidate the widely dispersed pattern of company-at-a-time server deployments into “hyperscale” cloud data centers (further driving up average utilization rates) have we seen the growth curve turn down.

The ability to maintain the Moore’s Law performance improvement curve built Intel and its x86 server architecture into the dominant player it is today.

But it’s getting harder to keep up the momentum. The complexity and cost of manufacturing today’s state-of-the-art processors (which increasingly integrate functions like graphics, memory management and network interfaces that until recently required additional specialized integrated circuits to perform) are getting close to diminishing returns. At a “feature size” of 14 nanometers (where Intel is today) some essential elements of a transistor are only a couple of dozen atoms wide.

Strange things start to happen when things get this small — and the next step down to 10nm will make things even less well defined. We aren’t out of design, materials and manufacturing tricks yet, but it’s possible to see the end of the road for what we know how to do. While it might be technically possible to make smaller transistors that work faster and consume less power, it looks as if they will no longer be cheaper. The economic half of Moore’s Law will end sometime before 2020, and with it the proliferation of new workloads driven by lower-cost transistors.

That’s not to say the sky is falling. We are still not using the computing tools we have to their full potential, and there are interesting new ideas about how to make better use of the transistors we can already build cheaply. But relying on a trend, however attractive it seems, to continue forever is never a good idea, and Moore’s Law (first proposed in 1965) has had a very good run.

And it’s worth thinking about whether there are other places in your business in which the paradox of abundance applies. It might change your strategic thinking if you find some.

John Parkinson is an affiliate partner at Waterstone Management Group in Chicago. He has been a global business and technology executive and a strategist for more than 35 years.