In our universe of ever-increasing data, the ways to access it and glean meaningful insights unfortunately remain rough terrain at best. Regardless of where on the “data is the new oil” spectrum you fall, there’s no denying that our digital future is here and data is its language, fuel, and currency.
In the current ecosystem, if a business requires external data to find the most relevant audience, demographic, or device information and drive high-level strategy, be it for revenue or product, it needs to identify the right suppliers of that data, then likely purchase the data in bulk. The result is receipt of millions of data points, at high cost, which could be riddled with overlap and duplication.
It’s easy to see how this becomes an option restricted to large enterprises with big budgets and teams, and even then the purchased outputs lack data specificity. Searching for the right suppliers of even that unspecific, duplicated data can take months on its own, before you factor in the costly effort of building secure pipes to transfer data between organizations.
Clearly, there is a price to operating in the world described above, and it cuts across three main vectors: time, money, and risk. As a CFO or a head of finance charged with the go/no-go decisions on a given project, it’s imperative to review the associated costs within a standard data project use case.
For example, suppose your marketing team has put forward a proposal for a large ad campaign with the potential for steady revenue. But some of the costs are not immediately clear, and you’ve been asked to help evaluate the decision to forge ahead. Given that data strategies tend to be inefficient, your first area to explore is reducing resource waste related to time. Who will research the data needed and where to locate it? A dedicated resource collating the data requirements for the project and then vetting suppliers to procure it is time-consuming.
Once the supply has been identified, there is the effort it takes to build out the infrastructure required to ingest the data. This usually takes the form of connecting systems via application programming interfaces. Doing so securely requires engineering expertise and technology setup.
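Those “secure pipes” can be as simple as authenticated HTTPS calls to a supplier’s API. Below is a minimal sketch in Python; the endpoint, token, and `build_ingest_request` helper are hypothetical, purely to illustrate the kind of plumbing the engineering team is being asked to build.

```python
import urllib.request

# Hypothetical supplier endpoint and token -- placeholders, not a real API.
SUPPLIER_URL = "https://api.example-supplier.com/v1/audience-segments"
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"

def build_ingest_request(url: str, token: str) -> urllib.request.Request:
    """Build an authenticated HTTPS request for pulling supplier data.

    HTTPS transport plus a bearer token is the bare minimum; real
    pipelines also need retries, pagination, and schema validation.
    """
    if not url.startswith("https://"):
        raise ValueError("refusing to send credentials over plain HTTP")
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )

req = build_ingest_request(SUPPLIER_URL, API_TOKEN)
```

Refusing plain-HTTP URLs up front is a cheap guard against leaking credentials; a production pipeline would layer on much more, which is exactly where the engineering cost accrues.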
After the teams have connected the pipes, the data analysts begin receiving and processing large, unstructured data sets. With the status quo, there isn’t much in the way of flexibility or agility with data. You get bulk data ingestion which needs to be sorted through to find and make sense of the relevant items. The inflexibility of buying data, i.e., turning on a full firehose of data with no ability to control what comes through, is another relic of an outmoded data broker ecosystem designed to fit only those companies that can afford it. What this invariably leads to is unusable (or unuseful) data for which you likely paid a pretty penny.
But you’re not done yet.
There are other costs lurking in the shadows. Money is likely the most obvious factor in deciding to take on a project from a finance perspective, but it’s still worth reviewing the various sources of strain on the wallet. Maximizing unit economics around customer acquisition costs tops the list. While the direct costs related to acquiring a customer are sales and marketing related, for data projects the net is cast even wider to include the time- and resource-related factors in servicing the client properly.
Indirectly, you’re including the hours of labor to achieve the infrastructure and insights required to complete the project. Certainly, finance leaders are aware that cash is king at any company, so reducing cash outflows on projects that require large budgets due to resource-related inefficiencies (waste) is an important consideration.
Additionally, there is the evaluation of the opportunity cost of the project. Where else could those resources have gone? Were they able to retrieve the most relevant data for the successful completion of the project?
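To make the arithmetic concrete, here is a minimal sketch of how direct data spend and indirect labor roll up into a fully loaded customer acquisition cost. Every figure and cost category below is a made-up assumption for illustration, not a number from this article.

```python
# Illustrative only: all figures below are assumptions, not article data.
direct_data_cost = 250_000   # bulk data purchase
sourcing_hours = 320         # researching and vetting suppliers
engineering_hours = 480      # building secure ingestion pipes
analyst_hours = 600          # sorting and cleaning bulk data
blended_hourly_rate = 95     # fully loaded labor cost per hour

# Indirect labor: the hours spent sourcing, connecting, and processing data.
labor_cost = (sourcing_hours + engineering_hours + analyst_hours) * blended_hourly_rate

# Fully loaded project cost = direct data spend + indirect labor.
fully_loaded_cost = direct_data_cost + labor_cost

new_customers = 1_800        # customers attributed to the campaign
cac = fully_loaded_cost / new_customers

print(f"Labor cost: ${labor_cost:,}")
print(f"Fully loaded project cost: ${fully_loaded_cost:,}")
print(f"Fully loaded CAC: ${cac:,.2f}")
```

In this hypothetical, indirect labor adds more than 50% on top of the sticker price of the data, which is the kind of hidden strain on unit economics the paragraph above describes.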
The analysis around data costs culminates in the need for a risk mitigation strategy for your data. Rules and regulations regarding data use and privacy are continuously evolving, and any data strategy that is not diligent in its approach to data governance risks exposure to incurring substantial future costs through fines, legal challenges, and loss of customer trust or goodwill. You can minimize risk and uncertainty in your data strategy by prioritizing controls, security, and compliance.
Establishing control over data management (vs. having it dictated by external authorities) and ensuring security at both the data and organizational levels will reduce the need for expensive corrective actions down the line. Because the rules around data are evolving, maintaining compliance and good data governance are key to staying ahead of changes that could have a financial impact on your organization.
In short, it is important to evaluate your data strategy across the three vectors below and their subcomponents.
Maximize efficiency by reducing resource waste related to: researching and vetting data suppliers, building ingestion infrastructure, and processing bulk, unstructured data.
Maximize unit economics by: lowering customer acquisition costs, trimming the indirect labor behind infrastructure and insights, and weighing the opportunity cost of the resources involved.
Minimize risk and uncertainty in your data strategy by: prioritizing controls, security, and compliance, and maintaining good data governance as regulations evolve.
What all of this leads to is the end state most of us are seeking: data democratization. This isn’t just a buzzword; it’s the opening up of access channels to data, regardless of an organization’s size, and it has a clear beneficial impact on the bottom line. In times like these, when agility, flexibility, and efficiency can markedly differentiate a business and help it gain a competitive edge, open access to tailored data becomes the defining requirement for truly owning your data strategy.
Yasmin Siddiqui is vice president of finance at data streaming platform Narrative I/O.