What the Heck is "Analysis"?
The analysis you do depends on who you are, what kind of data you’re playing with, and what sort of result is required.
If you’re an applied mathematician, then your mindset is trend analysis, statistical analysis, optimization, predictive modeling, and so on. And, you’d better be working with clean data – either it’s already clean, or a team of data wranglers has (very carefully!) cleaned it for you.
Unfortunately, the more complex the analysis and the dirtier the data, the more uncertain the conclusions. Massive swings in outcomes can occur by applying different acceptance criteria for data, using looser or tighter constraints, playing with error terms, wrangling the data differently, and so on. It’s easy to end up with mistakes like “cold fusion.”
On the other hand, there’s everyday analysis that’s conceptually quite simple. Identifying and moving excess money from zero-interest checking accounts to interest-bearing accounts. Locating and donating clothes you haven’t worn for years. Sorting the laundry by category to avoid your wife’s sweater shrinking into doll clothes (OK, I did that). You don’t need a perfect answer, and action can be taken immediately.
Procurement analysis is much more like everyday analysis than it is about complex math, despite punditry from people who should know better. High-value procurement analytics is mostly about classifying and understanding dirty data. What channels are we buying from? Are we buying from preferred vendors? Do we have contracts, are we buying from them, and are we getting the contract price? Are we paying different prices for the same item? What are we buying on and off PO, and should that change (either way)? What business units are underperforming on a cost per employee basis?
These are all classification questions, solvable by mapping data into understandable categories. Classification – mapping – never stops, and neither does the value it creates. If you're not mapping multiple dimensions all the time, you're not digging for value. And once initiatives are identified, those need to be classified too – easy, medium, or hard to implement; high, medium, or low potential savings. Sourcing plans must be developed and refined, assigned to managers, and marked with progress indicators – another classification exercise.
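To see how little math is involved, take the duplicate-pricing question above. In JavaScript (the language Spendata itself exposes for user logic, as noted below), it reduces to a group-by and a comparison. This is a minimal sketch with hypothetical field names (vendor, item, unitPrice), not anyone's production schema:

```javascript
// Minimal sketch, hypothetical fields: "same item, different prices?"
// is a group-by plus a min/max comparison, not complex math.
const transactions = [
  { vendor: "Acme Corp", item: "laptop-14in", unitPrice: 950 },
  { vendor: "Acme Corp", item: "laptop-14in", unitPrice: 1100 },
  { vendor: "Beta LLC",  item: "toner-hp26",  unitPrice: 80 },
];

// Group unit prices by item.
const pricesByItem = new Map();
for (const t of transactions) {
  if (!pricesByItem.has(t.item)) pricesByItem.set(t.item, []);
  pricesByItem.get(t.item).push(t.unitPrice);
}

// Flag items bought at more than one price.
for (const [item, prices] of pricesByItem) {
  const min = Math.min(...prices);
  const max = Math.max(...prices);
  if (max > min) console.log(`${item}: paying ${min} to ${max} for the same item`);
}
```

The hard part in practice isn't the logic; it's that vendor names and item descriptions arrive dirty, which is why the mapping step dominates the work.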
The State of Spend Analysis: “Do your work elsewhere”
Most spend analysis offerings focus entirely on identifying the “what” of “who buys what from whom” – mapping payment transactions to a commodity structure that approximates the “what.” Usually this is done with an offline procedure unavailable to end users. The delivery mechanism is a read-only database inside a BI tool.
But that’s just a baby step. The spend analysis system’s boilerplate dashboards and “impress the executives” pretty pictures are sales tools of little value beyond a first look. Since new dimensions and new analyses can’t be created inside the tool, spend analysts looking for opportunities must extract data to Excel to perform their own further classifications and analysis.
Excel is a terrible place to do this, of course. It can't handle millions of transactions. Mapping isn't intrinsic to its functionality, so it has to be improvised with lookup formulas. It's hard to change anything without introducing hard-to-find errors. Audit trails are nonexistent. Maintaining complex workbooks is a nightmare. Everyone knows this; we've all been there.
This is why Spendata focuses on analysis inside the tool. We make categorization and mapping trivially easy. Create a new category. Map items to it (vendors, commodities, business units, whatever makes sense). Refine it whenever you like. Then use that category for the next analysis. Lather, rinse, repeat. Everything can be done in seconds, right away, with no offline processing, just like a spreadsheet.
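As a rough illustration of that workflow (not Spendata's actual API – the rule format and helper below are invented for this sketch), a category can be as simple as an editable rule list whose mapping re-runs instantly:

```javascript
// Illustrative rule-based mapping, not Spendata's actual API.
// A category is an ordered rule list; refining the category means
// editing a rule and re-running the mapping – no offline processing.
const rules = [
  { category: "IT Hardware",     test: t => /acme|dell|lenovo/i.test(t.vendor) },
  { category: "Office Supplies", test: t => /toner|paper/i.test(t.item) },
];

const mapCategory = t =>
  (rules.find(r => r.test(t)) || { category: "Unmapped" }).category;

const transactions = [
  { vendor: "Acme Corp",   item: "laptop-14in" },
  { vendor: "Paper Co",    item: "toner-hp26" },
  { vendor: "Mystery Inc", item: "consulting" },
];

// "Unmapped" rows tell you where the next refinement belongs.
transactions.forEach(t => console.log(t.vendor, "->", mapCategory(t)));
```

The point of the sketch: each refinement is a one-line change, applied immediately, so the map-refine-analyze loop runs at spreadsheet speed.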
How about dimensions based on other dimensions, with dependencies automatically maintained? Check. Dimensions derived from a filtered View, changing as the filter changes? Check. Dimensions derived from transactions separated in time? Check. Adding a new data feed? Check. Modifying the cube and maintaining custom private analyses, while continuing to receive data refreshes? Check. Building responsive models? Check. What about dimensions and measures requiring decision-making and algorithmic choices? Check – and you code in JavaScript, not in some bizarre BI tool language that nobody knows.
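On that last point, a derived dimension is conceptually just a small per-row function. This hypothetical sketch shows the kind of decision-making logic you'd express in plain JavaScript rather than a proprietary BI language; the field names and the threshold are assumptions, not Spendata's actual scripting interface:

```javascript
// Hypothetical derived dimension: per-row logic with an algorithmic
// choice (a spend threshold), written in plain JavaScript.
const PREFERRED = new Set(["Acme Corp", "Dell"]);

function vendorStatus(row) {
  if (PREFERRED.has(row.vendor)) return "Preferred";
  // Judgment call: small one-off purchases are tail noise, not leakage.
  return row.amount < 500 ? "Tail (ignore)" : "Off-preferred (review)";
}

console.log(vendorStatus({ vendor: "Acme Corp",   amount: 9000 }));  // "Preferred"
console.log(vendorStatus({ vendor: "Mystery Inc", amount: 12000 })); // "Off-preferred (review)"
```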
The key to insight is mapping, mapping, then mapping again, until the conclusions – and the actions that should be taken – become as obvious as a Sesame Street puzzle.