Forecasting the Future of the Internet of Things

What’s different about the Internet of Things is the sheer scale of the deployment.
John Parkinson | April 8, 2015

Every now and again I get asked to do work to predict how some topic of interest will develop over the long term, usually at least ten years. I’ve been doing this for a long time – and I’ve learned that the best you can generally do is to get general directions and macro effects approximately right.

Beyond that, the only certainty is that you’ll mostly be wrong. Given that I generally work in areas related to emerging or evolving technologies, this is especially tough. As short as incremental product cycles have become, fundamental breakthroughs don’t happen predictably – and exploiting new capabilities takes time.

We generally see what I call the “radio with pictures” problem. When something new shows up (broadcast TV) we try to incorporate it with what we already know how to do (radio announcements). So early TV is an announcer speaking into a microphone while you watch him (there weren’t many hers) read from paper notes. It took a long time to understand the possibilities of TV: from live broadcast of events to produced content and serials.

So it is with many new capabilities. Their potential won’t be realized until we start to think differently about how to use them – and to understand their limits, both technical and economic.

Which brings me to the Internet of Things (IoT), sometimes called the Internet of Everything (IoE).

If you happen to live under a rock, you may not be aware of the IoT as the next big thing in technology. The idea is simple. As the cost of sensors of many kinds, non-volatile data storage, network connectivity, and computing continue to decrease and the capabilities available at any price point continue to grow, we can instrument “everything.” We can record data about what each thing does or what goes on around it; continuously analyze what’s going on; predict what’s about to happen; and (if appropriate) adjust each “thing” to ensure optimum performance.

This is not a new idea. Modern military aircraft cannot fly without constant micro-adjustment of their lifting and control surfaces and engines. RFID tags on pallets, cartons, and higher value items have been in use for a decade. Embedded road sensors have long been used to help manage traffic flow. What’s different about IoT is the sheer scale of the deployment.

Rather than a few billion “smart” devices (which is what we have today – already there are more devices than people), we will have tens of billions generating tens of trillions of data packets that need to be processed, analyzed, stored (if necessary), and acted on. That’s not going to work for the kinds of platform architectures we have experience with over the past 50 or so years of computing and networking technologies. What we know is “radio.” What we need to invent is ways to use “TV.”

Let’s look at some of the challenges.

  • Bandwidth: Sure, we have lots of bandwidth today. But it’s mostly in the network “core” not at the edge, which is increasingly wireless. With only a few billion end points (mostly people watching streaming video), we are already at saturation point for some types of traffic. In contrast, a fully deployed, smart meter system for utility consumption monitoring of every structure in the world would require as much additional bandwidth as is currently consumed by all that video. Several smart meter pilots have already stalled because of this. And smart meters would be less than 5% of the total devices deployed and connected by 2025.
  • Latency: Latency is why high-frequency traders locate near exchanges and take pains to minimize the length of the physical path their orders must traverse. The farther you are from the point at which the IoT data is collected by the sensors, the longer it takes to get an answer back to the device so that it can act on your analysis. There’s a hard physical minimum here (light in optical fiber covers a kilometer in roughly 5 microseconds, so the round-trip floor is around 10 microseconds per kilometer), but in practice we never get close to it. The “ping” time – the latency you get when you send a simple packet to see whether the destination address you want is there – is usually a few tens of milliseconds and can be much higher. Round-trip latency from the United States to India is around 400 milliseconds. Add in traffic contention from all that new data you’re sharing the network with, plus the inevitable computational delay for analytics and prediction, and you could be looking at close to a second of total latency.
  • Storage: It might seem counterintuitive, but the larger your data stores get, the less computationally efficient they become. Read and write times might look just fine, but finding the data you need takes longer and longer as you accumulate more of it. In part, this is because today we move the data to the processor, and there are limits to how fast we can do this.
  • Power: If we can plug sensors and IoT devices into the power grid, we are generally fine (although the sheer number of devices will generate a lot more demand for electrical power). But there will be many types of sensors and attached devices that won’t have access to a power cord or (in the case of vehicles, for example) to a generator. These situations will require batteries, preferably batteries that can be recharged rather than needing replacement. And batteries have been a bottleneck for several decades now. So IoT devices will need to be very low power to make the most of what’s available, which limits how powerful their radios can be, how much bandwidth they can sustain, and how extensive their processing power and local storage can be.
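The latency point above is easy to check with back-of-envelope arithmetic. The sketch below computes the theoretical round-trip floor over optical fiber, assuming signals propagate at roughly two-thirds of the speed of light (about 200 km per millisecond); the distances and function names are illustrative, not measurements.

```python
FIBER_KM_PER_MS = 200.0  # light in fiber covers roughly 200 km per millisecond

def round_trip_floor_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, ignoring all
    switching, queuing, and processing delays."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A sensor 2 km from its aggregation point: physics costs almost nothing.
local = round_trip_floor_ms(2)         # 0.02 ms
# A round trip to a data center ~13,000 km away (roughly US to India)
# already costs 130 ms before a single packet is queued or analyzed.
distant = round_trip_floor_ms(13_000)  # 130.0 ms
```

Comparing the 130 ms physical floor with the observed ~400 ms US–India round trip shows how much of real-world latency comes from routing, contention, and processing rather than distance alone.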

Take all these factors together and the architectural models we use today, which you can think of as a “dumb” network edge (with all sensor data flowing “inwards” to the core for processing and the results flowing back to the edge), won’t work. We need a new approach.

Today’s architectures are the way they are for good reasons. Centralizing expensive capital assets makes sense when the capabilities they represent have economies of scale. They’re easier to manage and keep operating efficiently, reliably, and securely. And we have grown accustomed to a “compute centric” architectural approach at enterprise scale. IoT will have to be different.

So let’s look at some of the options we have:

  • A pure “mesh” architecture: In this model (which is at the other extreme from today’s highly centralized approach), everything is “local,” with each IoT device storing its own data and handling its own computing. Devices are generally low or very low power and use non-volatile storage. Each device connects to all of its immediate neighbors (via a wired or wireless connection) and runs software that allows the mesh to pool and share resources as workload demands fluctuate. Specialized devices in a neighborhood know about other neighborhoods, but there is no overall “map” of everything – and there are no centralized capabilities at all. Resilience is provided by local redundancy, and the mesh is self-repairing, not relying on a fixed set of connections but adjusting as needed to use what’s available. The approach is very efficient for local control and optimization, but lacks a good way to identify events that have a widespread impact.
  • A hybrid architecture, combining local mesh with as many layers of aggregation as makes sense. In this model, most activity is still local to the device. But the software platform knows how to (a) connect to, and request actions from, the next level of aggregation (which may support a “physical” neighborhood, where all the connected IoT devices are adjacent, or a logical neighborhood, where all the connected devices have the same latency needs) and (b) pass some or all “local” data to the aggregation level for additional analysis and archival storage. At some point the next level of aggregation becomes what we think of today as the “cloud,” and there may be several layers of increasingly less “local” clouds.
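The hybrid model above can be sketched in a few lines: devices act on their own readings immediately and pass only summaries up the aggregation chain. This is a minimal illustration under assumed names (Device, Aggregator, the ten-reading batch size are all inventions for the example), not a definitive design.

```python
from statistics import mean

class Aggregator:
    """One level of aggregation; its parent is the next, less 'local' level."""
    def __init__(self, parent=None):
        self.parent = parent
        self.summaries = []

    def receive(self, summary):
        self.summaries.append(summary)
        if self.parent is not None:
            self.parent.receive(summary)  # escalate for archival/analysis

class Device:
    def __init__(self, aggregator, threshold):
        self.aggregator = aggregator
        self.threshold = threshold
        self.readings = []  # local, non-volatile storage

    def sense(self, value):
        self.readings.append(value)
        if value > self.threshold:  # local control: no round trip to the core
            self.adjust()
        if len(self.readings) >= 10:  # only summaries leave the edge
            self.aggregator.receive(mean(self.readings))
            self.readings.clear()

    def adjust(self):
        pass  # e.g. throttle back, open a valve, dim a light
```

The key design property: raw sensor data never crosses the network, so bandwidth and latency costs are paid only for the aggregates that genuinely need a wider view.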

As an example, consider smart transportation. Your car (whether manually or autonomously controlled) will collect and store:

  • Performance data from the power train, transmission, brakes, steering, and stability systems. Routine adjustments will be made to account for traffic conditions, road conditions, and weather. Periodic alerts for preventative maintenance can be issued to both the driver/owner of the vehicle and the assigned service center.
  • Driving data from position sensors, accelerometers, braking systems, and cameras will be stored and analyzed to assess driver performance from the perspectives of safety and economy. In extreme circumstances, the vehicle’s systems could intervene directly.
  • Your (and your passengers’) consumption of infotainment from the navigation system, radio, on-board music system, and video displays.

History would suggest that we will end up with a mixture of all three approaches (dumb edge, smart mesh, and hybrid), with workload characteristics determining what works best. Our smart transport example has a mesh of dumb sensors within the vehicle aggregating data up to three level-one “servers” (vehicle systems, driver aids, infotainment), which each “mesh” with other cars and sections of the road. Aggregation for each type of data then occurs to service centers, traffic zones, and content providers, and then to manufacturers, regional transport systems, and genre managers.

The critical design decisions need to balance what can best be done locally (managing fuel consumption) against what needs a broader but still relatively local perspective (route planning) and what needs a global view (vehicle design).
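One way to make that tiering decision explicit is to tag each workload with the narrowest scope whose data it actually needs. The workloads and tier names below are illustrative examples drawn from the smart-transport discussion, not a definitive taxonomy.

```python
# Map each workload to the narrowest tier that has the data it needs.
TIER = {
    "fuel_consumption": "device",         # local sensor data alone suffices
    "stability_control": "device",
    "route_planning": "neighborhood",     # needs nearby traffic, not the world
    "preventive_maintenance": "regional", # service-center view of one fleet
    "vehicle_design": "global",           # manufacturer-wide history
}

def placement(workload: str) -> str:
    """Return the narrowest tier that can run the workload; anything
    unrecognized escalates to the global tier by default."""
    return TIER.get(workload, "global")
```

Defaulting unknown workloads to the global tier is the conservative choice: it trades latency for completeness until someone proves a narrower scope is sufficient.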

This could be very exciting and create a lot of value in a multitude of areas. Or it could turn into radio with pictures.

John Parkinson is an affiliate partner at Waterstone Management Group in Chicago. He has been a global business and technology executive and a strategist for more than 35 years.