Excel, along with its extensions, provides a formidably featured data analysis product. Coupled with the functionality that comes pre-packaged in ERP and accounting applications, enterprises have access to a collection of analytical resources that can help inform their operations.
An honest appraisal of financial analytics solutions, then, is not a comparison of dedicated solutions against nothing at all. Rather, it is an exercise in recognizing where tools like ERP and Excel struggle, and evaluating whether the potential performance improvement of a separate analytics solution is worth the investment.
At Blue Hill, we take a broad view of “analytics.” Simply put, it encompasses any method for calculating insight from data, from simple summaries through complex modeling. With that in mind, it’s helpful to break up that grand idea into some more-digestible chunks.
To give us a foundation, we can use a quick 2-by-2 matrix (see Figure 1 below) to help place these different solutions into rough categories. On one axis, we take a look at the speed with which analyses are completed. Static reports that need to be run manually are on the left, periodic reports that are automatically generated on some set schedule are near the middle, and real-time monitoring is to the far right. On the other axis, we look at whether our analysis is reactive — happening after the fact — or proactive, giving us a view of the likely future. Figure 1 clarifies how and when financial analytics tools should be deployed, as we dig a bit deeper into how these types of analyses play out across different parts of the financial organization.
Reports: What Happened?
Reports are finance’s bread and butter: reports to management, to investors, to regulatory agencies. The relevant questions are, how much flexibility is available for customization? And how long do they take to create?
For example, getting a handle on which customers have outstanding balances through a receivables-aging report is standard fare. That’s provided out-of-the-box by most finance and accounting packages. Tweaking that standard report to alter aging buckets, or to display groups of customers categorized by some other criterion (say, by geography, industry, or manually defined risk category), may take some input from IT. Drilling down from dollar volume to customer to individual invoice may not be quite as common, depending on the individual system’s use of summary tables. That’s where we can run into a disconnect between knowing what happened and having the information necessary to do something about it.
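The bucketing logic behind an aging report is simple enough to sketch directly. The following is a minimal illustration, not any vendor's implementation; the invoice records, customer names, and the 30/60/90-day cutoffs are all hypothetical and would be adjusted to match a company's own aging policy.

```python
from datetime import date

# Hypothetical open invoices: (customer, invoice_id, amount, due_date).
invoices = [
    ("Acme Co",  "INV-1001", 1200.00, date(2015, 1, 15)),
    ("Acme Co",  "INV-1002",  450.00, date(2015, 3, 1)),
    ("Bolt Ltd", "INV-2001", 3000.00, date(2015, 2, 10)),
]

# Aging buckets keyed by days past due; the common 30/60/90 convention.
BUCKETS = [(0, "Current"), (30, "1-30"), (60, "31-60"),
           (90, "61-90"), (float("inf"), "90+")]

def bucket_for(days_past_due):
    """Return the label of the first bucket whose cutoff covers the age."""
    for cutoff, label in BUCKETS:
        if days_past_due <= cutoff:
            return label

def aging_report(invoices, as_of):
    """Summarize outstanding balances per customer per aging bucket."""
    report = {}
    for customer, _invoice_id, amount, due in invoices:
        label = bucket_for((as_of - due).days)
        report.setdefault(customer, {}).setdefault(label, 0.0)
        report[customer][label] += amount
    return report
```

Regrouping by geography, industry, or risk category is just a matter of swapping the key used in `aging_report`, which is exactly the kind of tweak that often requires IT involvement when the report is locked inside a packaged system.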
On the other end of the complexity spectrum are quarterly and annual reports. Because their format is standardized, the big driver here is the process of creation — keeping an eye on each step, from sub-ledger to general ledger close, to consolidation of multiple business units, and ultimately to report production.
These reports’ complexity is due to the large volume of data required and the number of people who participate in collecting, reviewing, and finalizing it. These are mostly accounting employees who, were they not consumed by periodic reporting, could be engaged in more strategic analysis in support of management decision-making. Because the use case of standard financial reporting is typically well-established, and the data needed to run the business on a monthly or quarterly basis is explicitly defined, the real opportunity for ad-hoc or scheduled reporting is to reduce the time and cost associated with reporting, and thereby lower total cost of ownership.

Models, Forecasts, Predictive Analytics: What’s Likely to Happen?
Basic financial models are an area where Excel can really shine. Statistical add-ons enable regression analysis to identify which variables impact end results, and to what degree they contribute. Variables that are closely tied to one another can be weeded out with covariance analysis, refining the model so it isn’t double-counting.
Since the ultimate model is simply an equation that feeds in a collection of individual data points, it is easily represented and calculated in a spreadsheet format. Model testing takes a bit more effort, either by manually inputting historical data to ensure that the results match up with what has been experienced, or automating the process through scripting or third-party modules. The main questions here are: How much data can be crunched to develop the model? How efficient and robust a testing regimen is available when relying on a spreadsheet rather than a dedicated application?
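The "model as an equation" idea above can be made concrete. This is a minimal sketch, assuming a single driver and ordinary least squares (the same fit a spreadsheet trendline produces); the spend and revenue figures, and the held-out data point, are entirely hypothetical.

```python
def fit_ols(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Historical driver (e.g. marketing spend) vs. result (e.g. revenue).
spend   = [10, 20, 30, 40]
revenue = [55, 105, 148, 202]

a, b = fit_ols(spend, revenue)

def predict(x):
    """The 'ultimate model': a single equation fed by data points."""
    return a + b * x

# Back-test: compare model output against actuals held out of the fit.
holdout = [(50, 251)]
errors = [abs(predict(x) - actual) for x, actual in holdout]
```

Everything here fits in a spreadsheet; the pain point the paragraph raises is scale: running this fit and back-test over millions of rows, or over many candidate variable sets, is where a dedicated application pulls ahead.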
The best way to differentiate forecasting from predictive analytics is to look at what feeds into them. Forecasts focus on historical numbers and project them into the future, typically based on simple interpolation or extrapolation of current trends. Predictive analytics take the next step of identifying core drivers and relationships between past actions and current performance to understand the underlying drivers of business value. When looking ahead, predictive analytics looks to forecast the likely value of those drivers, and then works through the model to get an idea of what the results would be.
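The distinction can be shown in a few lines. In this sketch, the quarterly revenue history, the price, and the driver forecast are hypothetical; the point is only that the forecast extrapolates the metric itself, while the predictive approach forecasts a driver and runs it through a model relating driver to outcome.

```python
# Hypothetical quarterly revenue showing a steady +4 per quarter trend.
history = [100, 104, 108, 112]

# Forecast: simple linear extrapolation of the metric itself.
trend = (history[-1] - history[0]) / (len(history) - 1)
naive_forecast = history[-1] + trend

# Predictive analytics: forecast the driver (say, units sold), then
# apply the model (revenue = price * units) linking driver to outcome.
price = 2.0                      # assumed unit price
units_forecast = 59              # driver forecast from its own trend
model_prediction = price * units_forecast
```

When the drivers change (a price increase, a shift in unit volume), the naive extrapolation has no way to register it, while the driver-based prediction does.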
In general, Excel and other spreadsheets provide an acceptable starting point for a rough draft of a financial forecast and for initial discovery of core drivers and variables. However, spreadsheet models are best suited to being built and viewed by a single individual: out of the box, they lack the governance needed to catch errors or to challenge the assumptions embedded in a financial model. Once multiple people must access, enter, or model financial data on a regular basis, spreadsheets start to fall apart as a tool to effectively support enterprise financial analytics.
Real-Time Dashboards: What’s Happening Now?
Dashboards tell us facts: we have $1 million of payables coming due today; we have 300 invoices in the processing queue; we have completed GL close for 16 of our 18 operating units at quarter-end. These are quick status-check items executives can reference for an informed picture of current operations. They are usually fairly straightforward calculations, well within the capability of any spreadsheet, assuming the information is there for those calculations to be run.
The reality of financial analytics is that any new solution coming into the organization must be accompanied by a well-defined roadmap for migrating historical data, queries, and report formats out of spreadsheets and embedded reporting tools into a centralized financial tool. Fear of being unable to fully recreate legacy spreadsheet functionality, or to fully audit existing business processes, can lead to a reluctance to centralize and standardize real-time reporting, even though doing so can greatly improve enterprise-wide visibility into cash flow and daily operational finance decisions.
Monitoring and Detection: What’s Really Happening Now?
Perhaps the best example of monitoring and detection is the application of predictive models to identify fraudulent financial activity. It starts in much the same way as the modeling discussed above by profiling past transactions to identify factors common to fraudulent activity: invoice totals in even, thousand-dollar increments; sequential invoice numbers from large suppliers; payments to vendors with recently-updated contact information.
And as fraudsters become more clever and subtle, real-time counter-fraud efforts also need to weigh the frequency, location, and category of transactions. Those are the drivers we’re looking for. Unlike before, we’re not using this fraud model to predict what next year’s total will be; we use it to evaluate every transaction as it happens, flagging potentially fraudulent activity for further review before any payment is made. And that’s just one example.
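The red flags named above translate naturally into screening rules applied to each transaction as it arrives. This is a simplified rule-based sketch, not a statistical model: the transaction fields, the helper’s signature, and the 30-day "recently updated" window are all assumptions for illustration.

```python
from datetime import date

RECENT_CHANGE_DAYS = 30  # "recently updated" window (assumed)

def fraud_flags(txn, last_invoice_no, contact_updated, today):
    """Return the list of red flags a single transaction trips.

    txn             -- dict with "amount" and "invoice_no" (hypothetical schema)
    last_invoice_no -- previous invoice number seen from this supplier, or None
    contact_updated -- date the vendor's contact info last changed
    """
    hits = []
    # Invoice totals in even, thousand-dollar increments.
    if txn["amount"] % 1000 == 0:
        hits.append("even thousand-dollar amount")
    # Sequential invoice numbers from the same supplier.
    if last_invoice_no is not None and txn["invoice_no"] == last_invoice_no + 1:
        hits.append("sequential invoice number")
    # Payments to vendors with recently updated contact information.
    if (today - contact_updated).days <= RECENT_CHANGE_DAYS:
        hits.append("recently updated vendor contact info")
    return hits
```

In production, the equivalent checks run inside a streaming pipeline so that a flagged transaction can be held for review before any payment goes out, which is precisely the workload spreadsheets are not built for.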
Real-time monitoring of financial transactions is frequently beyond the capabilities of spreadsheets, which are poorly suited both to managing real-time transactions and to serving as a system of record for real-time workflows. Even ERP systems typically run on a standard relational database that may not be well-suited to real-time transactional monitoring. To adopt this technology, companies often must either build their own applications on streaming and log-management analytics software, or purchase purpose-built software for counter-fraud or other key use cases.
Conclusion
In the enterprise context, where the financial impact of inefficiencies or poor decisions can scale up significantly, relatively modest improvements in the use of financial analytics can translate into considerable dollars.
The key is to make an honest appraisal both of current capabilities and of the potential impact of better visibility, more-efficient data gathering and compilation, and prevention of loss-related transactions (whether through non-compliance or outright fraud) that are eating at your bottom line.
Scott Pezza is principal analyst and James Haight is a research analyst at Blue Hill Research. A longer version of this article was originally published on Blue Hill’s website.