The Practical Use of genAI

The newest genAI models can provide useful spend analysis mapping results when handled correctly. Unfortunately, genAI is by definition “generative,” which means that it’s inclined to make stuff up. Recent examples include fictitious legal citations, plausible quotations from mythical industry analysts, and a stubborn insistence on the Copenhagen interpretation of quantum mechanics.

Potentially incorrect or invalid results must be managed, or the results as a whole are not useful. The key to managing results of uncertain correctness is provenance. In Spendata, we track the provenance of every mapping rule for all time: we know how and when every rule affected the spend map, and who, or which AI, created and approved it. We provide a rule-vetting UI that lets users quickly mark rules as correct or reject them as not useful, so users can manage genAI mapping effectively.
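To make the idea concrete, a rule with provenance might look like the following record. This is a minimal sketch; the `MappingRule` structure and its field names are illustrative, not Spendata's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MappingRule:
    """A spend-mapping rule that carries its provenance with it."""
    pattern: str                        # e.g. a supplier-name match
    category: str                       # the commodity it maps to
    originator: str                     # "heuristic", "knowledge-base", "genAI", or a user id
    created_at: datetime
    approved_by: Optional[str] = None   # reviewer id, once vetted
    approved_at: Optional[datetime] = None
    rejected: bool = False

    def approve(self, reviewer: str) -> None:
        """Record who vetted the rule and when."""
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)
        self.rejected = False

    def reject(self, reviewer: str) -> None:
        """Record the rejection without erasing who made the call."""
        self.approved_by = reviewer
        self.rejected = True

# A genAI-created rule, later vetted by a (hypothetical) analyst:
rule = MappingRule("ACME CORP", "Office Supplies", "genAI",
                   datetime.now(timezone.utc))
rule.approve("analyst-17")
```

Because the record never discards its `originator` or reviewer fields, every answer to "why is this transaction mapped here?" stays reconstructible.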

We use a combination of AI techniques to obtain the most accurate results. We use traditional AI (heuristics and a knowledge base) to generate reliable, curated mapping rules for a large percentage of spending. Then we provide a real-time genAI mapping interface that lets users address the remaining unmapped spending with genAI-created mapping rules and approve those rules with a custom UI. In every case, users have visibility into how decisions were made and can override any or all of them.
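The two-pass flow described above can be sketched as follows. The heuristic lookup table and the `genai_suggest` callable are stand-ins for illustration, not Spendata's implementation:

```python
def map_spend(transactions, heuristic_rules, genai_suggest):
    """Map spend in two passes: curated heuristics first, genAI for the rest."""
    mapped, pending = {}, []
    for txn in transactions:
        category = heuristic_rules.get(txn["supplier"])
        if category is not None:
            mapped[txn["id"]] = (category, "heuristic")   # trusted, curated source
        else:
            # Ask genAI only for what the heuristics could not cover;
            # its suggestion is queued for user approval, not silently applied.
            pending.append((txn["id"], genai_suggest(txn), "genAI"))
    return mapped, pending

# Example with a stubbed genAI suggester:
txns = [{"id": 1, "supplier": "ACME CORP"},
        {"id": 2, "supplier": "UNKNOWN LLC"}]
rules = {"ACME CORP": "Office Supplies"}
mapped, pending = map_spend(txns, rules, lambda t: "Professional Services")
```

The point of the split return value is that the two streams are never mixed: curated mappings apply immediately, while genAI suggestions wait in a review queue.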

Once the user makes a decision, that decision sticks unless the user later changes their mind. That matters because the system's results must not appear to be "shifting sands," as is common with re-applied AI. When users review their data month after month, the logic applied to their spending must stay consistent, not change because of the inherent randomness of a genAI model's output or because of an update to the model itself.

Ronald Reagan was fond of saying "Trust, but verify." We do better: we automatically approve rules when the originator is trusted, and mark rules as needing approval when the originator is less trustworthy. Then we provide a powerful review mechanism that records the reviewer with each decision.
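That triage policy can be expressed in a few lines. The trust levels and threshold below are illustrative assumptions, not Spendata's actual configuration:

```python
# Originators ranked by trust; the threshold decides what is auto-approved.
TRUST = {"user": 3, "knowledge-base": 2, "heuristic": 2, "genAI": 1}
AUTO_APPROVE_AT = 2

def triage(originator: str) -> str:
    """Auto-approve rules from trusted originators; queue the rest for review."""
    level = TRUST.get(originator, 0)   # unknown originators get zero trust
    return "approved" if level >= AUTO_APPROVE_AT else "needs-review"
```

Under this policy, a curated heuristic rule goes straight into the spend map, while a genAI-created rule always lands in the review queue.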

The end result is controllable and usable AI: it’s AI with provenance.
