Businesses have always needed to create and maintain accurate and complete records of their operations and finances. Whether kept on stone tablets, engraved scrolls, hand-written ledgers or computers, accurate, complete and up-to-date information has become an essential foundation for business operations. Over the past two decades, businesses in the developed economies probably invested well over $500 billion in information systems that both manage and depend on this data. So why do we still have so many issues with trusting the results?
In many organizations, fragmented, inconsistent product or service data slows time-to-market, creates supply chain inefficiencies, results in weaker-than-expected market penetration and drives up the cost of compliance and reporting. Fragmented and inconsistent customer data hinders accurate revenue recognition, introduces reputation risk, creates sales inefficiencies, results in misguided marketing campaigns and threatens customer loyalty. Fragmented and inconsistent supplier and human capital data have similar-scale effects, as does an inconsistent chart of accounts across divisions or subsidiaries that makes consolidation and reporting complex, slow and error prone.
Those categories of data (product, customer, supplier, financial structure and people) are only five of a large number of key areas of business information commonly referred to (at least by the IT community) as Master Data, and they can account for a significant share (as much as 15 percent) of an enterprise's non-transactional data. The collection of policies, processes, governance approaches and tools that help a business deal with the creation and management of these data sets makes up Master Data Management (MDM). Performing MDM well is difficult, both technically and operationally. That’s one reason why most of the world still runs (at least in part) on spreadsheets.
How did the mishandling of master data happen? Well, let’s dig a little deeper into what’s been going on with enterprise data for the past 45 years or so. Each time a business buys or sells something, a “transaction record” gets created. In modern information systems, the data in the record is constructed from master data references (product, customer, contract terms, salesperson, location, etc.) and from data unique to that transaction (quantity, date, calculated prices, shipping costs and so on). The master data references don’t change much, but they do change sometimes, and when they do we have a potential problem: How do we link the new values to the old ones, and how do we ensure that everyone knows which value of the reference data to use?
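To make the split concrete, here is a minimal sketch (the product and customer tables, IDs and prices are invented for illustration) of a transaction record that stores only references into master data, plus the fields unique to that transaction:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical master data: stable reference records keyed by ID.
PRODUCTS = {"P-100": {"name": "Widget", "unit_price": 9.99}}
CUSTOMERS = {"C-001": {"name": "Acme Corp", "terms": "NET30"}}

@dataclass
class Transaction:
    product_id: str   # reference into product master data
    customer_id: str  # reference into customer master data
    quantity: int     # unique to this transaction
    tx_date: date     # unique to this transaction

    def total(self) -> float:
        # The price comes from the master record, not the transaction --
        # so if master data changes (or is deleted), historical totals
        # computed this way silently change or break.
        return PRODUCTS[self.product_id]["unit_price"] * self.quantity

tx = Transaction("P-100", "C-001", quantity=3, tx_date=date(2011, 6, 1))
```

The design choice that makes this efficient (store a reference once, look up the details when needed) is exactly what makes changing or deleting master data risky.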
These kinds of changes to master data actually occur more often than you might think. Product names change. Specifications and offer bundles change. Companies move, merge or split. People move from job to job and place to place, get promoted, retire. All these changes threaten the correctness and consistency of the master data.
We may even have business practices that create problems. I once worked on an ERP implementation with a specialty fashion retailer that reused the stock keeping unit (SKU) codes for its products on a seasonal basis. Product-line profitability calculations were very complex during the quarters when product lines changed over. For a while, each SKU value could refer to two different products. Even adding new master data can be a challenge — to the system, AT&T is not the same as A.T.&.T. or AT&T Inc. or even A T & T — unless you have sophisticated ways to handle aliases.
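A crude sense of what alias handling involves can be sketched in a few lines. This is a naive canonicalization (uppercase, drop common corporate suffixes, strip punctuation); real MDM matching engines use far more sophisticated techniques such as phonetic matching, edit distance and curated alias lists:

```python
import re

def normalize_name(name: str) -> str:
    # Naive canonical form: uppercase, remove a few common corporate
    # suffixes, then strip everything that is not a letter or digit.
    s = name.upper()
    s = re.sub(r"\b(INC|CORP|LLC|LTD)\b\.?", "", s)
    s = re.sub(r"[^A-Z0-9]", "", s)
    return s

variants = ["AT&T", "A.T.&.T.", "AT&T Inc", "A T & T"]
canonical = {normalize_name(v) for v in variants}  # all collapse to one key
```

Even this toy version shows the point: without some normalization layer, each spelling variant looks like a brand-new customer.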
Deleting entries can also be a problem, breaking what data administrators call “referential integrity.” Many old transaction records are still there, but now have entries for which there is no current master data reference. Reporting systems (and some database management utilities) generally fall over when this happens.
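The referential-integrity problem is easy to demonstrate. In this hypothetical sketch (invented keys and records), a product is removed from master data while old transactions still point at it, leaving "orphan" references that a reporting system would choke on:

```python
# Current master data keys; P-999 was deleted at some point.
products = {"P-100", "P-200"}

transactions = [
    {"id": 1, "product_id": "P-100"},
    {"id": 2, "product_id": "P-999"},  # dangling reference
]

# An integrity check: find transactions whose master data reference
# no longer resolves to a current record.
orphans = [t for t in transactions if t["product_id"] not in products]
```

Relational databases can enforce this with foreign-key constraints, but only if the transaction and master data live in the same system, which is precisely what siloed applications don't give you.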
For the first 25 years of corporate computing, we basically put up with these issues. Information systems were created in silos. Each system held its own master data and we had plenty of cheap computing power to translate this data from one silo to another when needed. “ETL” (extract, transform, load) software handled most of the problems caused by inconsistent data, reported exceptions that couldn’t be handled automatically and relied on humans to manage the resolution of inconsistencies (workable, but not always good for an audit trail).
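The ETL pattern described here boils down to a mapping step plus an exception queue. A minimal sketch, with an invented cross-silo code mapping, shows the shape of it: records that translate cleanly are loaded, and the rest are set aside for a human to resolve:

```python
# Hypothetical mapping from one silo's customer codes to another's.
CODE_MAP = {"CUST-A1": "C-001", "CUST-B2": "C-002"}

def translate(records):
    loaded, exceptions = [], []
    for rec in records:
        target = CODE_MAP.get(rec["customer_code"])
        if target is None:
            exceptions.append(rec)  # no mapping: escalate to a human
        else:
            loaded.append({**rec, "customer_code": target})
    return loaded, exceptions

loaded, exceptions = translate([
    {"customer_code": "CUST-A1", "amount": 100},
    {"customer_code": "CUST-X9", "amount": 250},  # unknown source code
])
```

The weak spot the article points to is visible even here: the exception list is handled out-of-band, which is why the audit trail so often suffered.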
However, with the wave of activity associated with process reengineering, increased compliance requirements and enterprise resource planning (ERP) systems, ETL approaches were not always able to support the level of information integration, commonality and consistency that ERP systems expected (and, increasingly, needed to function). The need for (if not always the practice of) Master Data Management was born.
By then, however, we were a long way down the process automation road, IT costs were already high (and rising every year) and there was little appetite for a major clean-up that (1) wasn’t going to add much if anything to the bottom line and (2) carried a good deal of operational risk. So, in most cases, we continued to live with the problem. Some clean-up occurred. Some reference data was outsourced (which didn’t always improve things, but at least there was someone to blame for the errors). MDM projects went on the CIO’s wish list, but seldom got beyond the planning or pilot stage: too many competing priorities; too hard to do; too expensive.
That was a mistake. The lack of a single source of consistent master data drives up system and storage costs; adds complexity; wastes resources in error correction and “harmonization”; and exposes compliance and regulatory reporting risks. It’s also becoming a competitive disadvantage in an era where information-based analytics and business intelligence capabilities are becoming more and more important. MDM isn’t just an IT issue anymore — it’s a critical business issue and needs to be treated as such.
The good news is that the tools and processes needed to do MDM well (if not perfectly) are now available and have become much more reasonably priced. Cleaning up master data will never be cheap or easy, but it’s becoming a necessity — and waiting isn’t likely to make the problem smaller. Spreadsheets are great tools for many things, but it’s time to retire them in favor of integrated systems that can access complete, accurate and up-to-date master data that’s consistent and trustworthy.
John Parkinson is an affiliate partner at Waterstone Management Group in Chicago. He has been a global business and technology executive and a strategist for more than 35 years.