Big Data Doesn’t Require Big Dollars

Increasing processing power is not the best answer to the large volume of streaming data that is overwhelming IT systems.
Alan Cyron | December 12, 2017

By now every CFO understands the value of big data. Big data allows organizations to obtain valuable insights to close sales, reduce operating expenses, avert cyber attacks, and assess business tactics. The goal is to process massive amounts of information to make more informed, rapid decisions.

Hindering that objective is the accelerating growth of data. As data volume continually expands, the cost to extract value from this storehouse of business intelligence expands with it. The Internet of Things (IoT) alone is expected to produce 600 zettabytes (ZB) of data by 2020, up from 145 ZB in 2015. One ZB is equivalent to the amount of data stored in 250 billion DVDs, or about one billion terabytes.

To expand processing power to maintain pace with the growth in data, CFOs must make expensive ongoing investments to increase computing power (not to mention storage and networking).

Blame this problem in large part on traditional data architectures, particularly the use of edge systems that reside at the periphery of a network. Such systems provide a layer of security around core business systems — the various applications used by each company department. Since an edge system is effectively an outer barricade, consumers must first “knock” on the edge system’s “door” to be permitted entry. Based on the system’s threat-detection analysis, they may be blocked at the entryway.

What does this have to do with big data? Edge systems slow down the processing of data, much like security personnel at an airport hinder access to a terminal gate. Say a consumer initiates a peer-to-peer payment transaction from a mobile application. To let this incoming traffic pass through, the edge system’s logic analyzes whether the person may be a hacker and whether the transaction is legitimate. The logic reaches into back-end legacy systems and data lakes, performing millions of calculations to make additional determinations, hopefully in a second or less. Take any longer, and the consumer may move on to another website, squandering a potential transaction.
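To make the edge system’s dilemma concrete, here is a minimal Python sketch of a gate that scores a transaction against a sub-second latency budget. The scoring rules, thresholds, and function names are invented for illustration; a real edge system performs far richer analysis against back-end systems and data lakes.

```python
import time

# Hypothetical values for illustration only.
LATENCY_BUDGET_S = 1.0   # respond in a second or less, per the article
RISK_THRESHOLD = 0.8

def risk_score(transaction: dict) -> float:
    """Toy stand-in for the millions of calculations an edge system
    runs against legacy systems and data lakes."""
    score = 0.0
    if transaction.get("amount", 0) > 10_000:
        score += 0.5
    if transaction.get("new_device", False):
        score += 0.4
    return min(score, 1.0)

def edge_gate(transaction: dict) -> str:
    """Allow or block a transaction, failing if the check blows the
    latency budget (at which point the consumer has likely moved on)."""
    start = time.monotonic()
    score = risk_score(transaction)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        return "timeout"
    return "blocked" if score >= RISK_THRESHOLD else "allowed"

print(edge_gate({"amount": 25_000, "new_device": True}))  # blocked
print(edge_gate({"amount": 40, "new_device": False}))     # allowed
```

The point of the sketch is the budget, not the scoring: every millisecond the gate spends consulting back-end systems comes out of the same second the consumer is willing to wait.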


That same experience confronts all others seeking to enter enterprise IT systems, such as suppliers, vendors, and company executives on their mobile devices. “The volume of streaming data is overwhelming IT systems,” Gartner says of the environment. “As a result, many businesses are ‘blind and deaf’ for minutes to hours at a time.”

Lost Time

The traditional solution to this dilemma is to throw more money at increasing processing power. But since the volume of data keeps growing, there is no end in sight to those ongoing expenses. This is a no-win situation for CFOs, who must balance the company’s investments in innovation, growth, risk management, and cost reduction when allocating resources.

The real solution is to achieve high-volume, high-speed data processing at a low cost of ownership. That capability is now at hand. Known as Hybrid Transactional/Analytical Processing (HTAP), or Hybrid Memory Architecture (HMA), it involves a relatively simple, scalable data architecture composed of a real-time database and a historical database (hence the word “hybrid”).

This is tomorrow’s state-of-the-art data architecture. Gartner, Forrester, and 451 Research perceive HMA as a viable means to achieve a cost-effective alternative to traditional data architectures. “[It’s] a new approach that delivers consistent, trusted, reliable, and low-latency access to data while lowering costs of ownership,” Forrester says.

HMA provides a unique way to process more data in less time. Banks, telecommunications companies, and other enterprises with mission-critical data have begun to use this architecture. Given the exponential increase in their data from clicks, swipes, micropayments, cyber packets, social feeds, and meter readings, such businesses no longer need to fear becoming “blind and deaf” for any period of time.

Without getting too deep into the nitty-gritty of the technology, HMA allows data to be kept closer to where business decisions need to be made. It leverages the benefits of memory-based storage technologies and delivers the reliability enterprises need at high capacity and performance. Since this technology requires fewer servers to deliver these benefits, it is fast gaining traction among enterprises with core systems and edge-type web applications.

An example of this data architecture is a ride-hailing company that needs to store only a small array of real-time data: driver tracking, passenger tracking, traffic conditions, optimal routing, and whether or not the passenger likes pumpkin spice lattes (and whether there’s a coffee shop nearby to whip one up). The architecture is built with logic that serves only this real-time information. The business data derived from this real-time data flows into the historical database. At a much lower price point, the business can achieve mission-critical services at near-instant speed.
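The hybrid split described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s API: a small in-memory “hot” store serves real-time operational reads, and records evicted from it flow into a stand-in historical store for later analysis. All class and method names, and the simple eviction policy, are assumptions made for the example.

```python
from collections import deque

class HybridStore:
    """Toy sketch of a hybrid (HTAP/HMA) data layer: a bounded
    in-memory real-time view plus a historical store that absorbs
    whatever ages out of it. Purely illustrative."""

    def __init__(self, hot_capacity: int):
        self.hot = {}            # real-time, in-memory view
        self.order = deque()     # eviction order for hot keys
        self.historical = []     # stand-in for the historical database
        self.hot_capacity = hot_capacity

    def write(self, key: str, value: dict) -> None:
        # When the hot store is full, the oldest record flows
        # into the historical store rather than being discarded.
        if key not in self.hot and len(self.hot) >= self.hot_capacity:
            old_key = self.order.popleft()
            self.historical.append((old_key, self.hot.pop(old_key)))
        if key not in self.hot:
            self.order.append(key)
        self.hot[key] = value

    def read_realtime(self, key: str):
        # Operational queries (driver location, routing) hit memory only.
        return self.hot.get(key)

store = HybridStore(hot_capacity=2)
store.write("driver:42", {"lat": 40.7, "lon": -74.0})
store.write("rider:7", {"pickup": "5th Ave"})
store.write("driver:43", {"lat": 40.8, "lon": -73.9})  # evicts driver:42
print(store.read_realtime("driver:43"))
print(len(store.historical))  # prints 1: the evicted record moved over
```

The design choice the sketch captures is the one the article describes: the real-time side stays small and fast because it holds only what operational decisions need right now, while everything else accumulates cheaply in the historical database.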

In the vendor marketplace for HMA, the focus is on ever-greater processing speeds for constantly growing data volumes. My company’s own hybrid memory architecture is capable of handling internet-scale volumes with 10-to-1 price-to-performance savings.

For CFOs of both established and emerging enterprises continuously digging into their corporate wallets to increase computing power, HMA can be a cost-effective alternative worth discussing with corporate IT leaders — before the zettabytes come marching in.

Alan Cyron is the CFO of Aerospike, provider of a flash-optimized, in-memory open source NoSQL database.
