Specialized Software Struggles at Analysis

Like many other enterprises, you may have a data analysis challenge. You’ve got valuable (and voluminous) business data, but it’s living in different places, in various formats, with differing semantics. Aside from an occasional insightful spreadsheet analysis on a subset of the data, there’s been little progress toward extracting that value.

Specialized (and costly) software applications attempt to deal with this problem, and, in general, fail spectacularly. Examples include ERP systems that pretend to do everything (and don’t), CRM systems that record information but fail to analyze it, and bespoke spend analysis systems that provide pretty dashboards and zero analysis capability.

Enter a big consulting firm, or perhaps your own IT department, with the latest Big Idea: the “data lake.” Just put all of your business data into one big database, and then it’s easy, right? Now you’re just an SQL query away from insight into pretty much anything. Unfortunately, that idea doesn’t work, for lots of reasons. For example, much of your data won’t make it into the lake, because… didn’t we just say it’s living in different places, in various formats, with differing semantics? That problem has to be solved before the data can even be considered for the lake.

But there’s a more profound problem with the data lake idea, and it’s this: the business data that’s valuable is likely to be millions of rows of transactional data, with lots of associated detail. Relational databases (the “lake”) are terrible at querying transactional data. That data has to be extracted and reformatted into an OLAP cube (for example, within a BI tool), and only then can it be interrogated in any reasonable way.
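To make the extraction step concrete, here’s a toy sketch in Python with pandas, using invented data and column names. It shows the kind of pre-aggregation that turns raw transactions into the fixed dimensions-by-measures summary an OLAP cube or BI tool actually queries:

```python
import pandas as pd

# Toy transactional data -- in practice, millions of rows with many detail columns.
txns = pd.DataFrame({
    "supplier": ["Acme", "Acme", "Globex", "Globex"],
    "category": ["IT", "Office", "IT", "IT"],
    "quarter":  ["Q1", "Q2", "Q1", "Q2"],
    "amount":   [1200.0, 300.0, 800.0, 950.0],
})

# The "cube-building" step: collapse transactions into a fixed
# dimensions-by-measures summary that a BI tool can slice quickly.
cube = txns.pivot_table(index=["supplier", "category"],
                        columns="quarter",
                        values="amount",
                        aggfunc="sum",
                        fill_value=0.0)
print(cube)
```

Note that once this summary is built, its dimensions are frozen: adding a new way of slicing the data means going back to the raw transactions and rebuilding, which is exactly the rigidity the next paragraph describes.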

Unfortunately, OLAP cubes and BI tools are terrible at actual analysis. Unlike the flexibility you have with a spreadsheet, you’re stuck with a fixed schema. You can’t create new dimensions (new columns that segment the data further) or change data mappings or hierarchies. You can’t create reactive models, where changing an input or a mapping will pivot and refresh the whole model. So now what? You’re back to the occasional insightful spreadsheet analysis of a subset of the data, extracted from the BI tool.
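A small sketch of what “creating a new dimension” means in practice, again in Python with pandas and invented data (the supplier names and region mapping are hypothetical). In a spreadsheet this is one helper column; in a fixed OLAP schema it would require a schema change and a cube rebuild:

```python
import pandas as pd

txns = pd.DataFrame({
    "supplier": ["Acme", "Globex", "Initech"],
    "amount":   [1200.0, 800.0, 450.0],
})

# A "new dimension": segment suppliers by a mapping we just invented.
region_map = {"Acme": "EMEA", "Globex": "APAC", "Initech": "EMEA"}
txns["region"] = txns["supplier"].map(region_map)
by_region = txns.groupby("region")["amount"].sum()

# Change the mapping and re-derive: the whole segmentation refreshes,
# which is the reactive, spreadsheet-like behavior the post describes.
region_map["Globex"] = "EMEA"
txns["region"] = txns["supplier"].map(region_map)
by_region_v2 = txns.groupby("region")["amount"].sum()
```

Here the remapping of one supplier flows through to every downstream total on re-run; the complaint in the paragraph above is that a BI tool gives you no equivalent move without rebuilding the cube.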

Let us help.

Spendata offers a way forward. We didn’t start out to build a multipurpose business data analysis tool, but by focusing on the data analysis problems confronted by our users, we ended up solving the conundrums outlined above, as well as others. Spendata can handle millions of rows of data, but it can also build new dimensions and re-map data on the fly, automatically updating its internal schema (which users don’t see and don’t worry about). Spendata maintains audit trails and auditability with database rigor, but it maintains dependencies like a spreadsheet, allowing for real-time reactive modeling at database scale. It can load data of different formats and different semantics and keep those transforms active for subsequent refreshes.

Let us demonstrate how your data analysis challenges can be overcome, and how your team can effectively explore the immense value buried in your business data. We can show you using your machine, with your data, while you watch. That’s because Spendata runs securely on your machines, behind your firewall, and never moves your data anywhere. You don’t have to worry about GDPR, HIPAA, data sovereignty, or other security issues, because Spendata processing is performed locally, with no server involvement.

Questions? Contact us


Related Posts

Closing the Analysis Gap