Software normalisation (or recognition, as it's also known) isn't particularly new, but it's becoming an increasingly hot topic from an ITSM perspective, driven by demands for normalised software data to populate the CMDB.
When it comes to a normalisation service – and the database of information which underpins it – there are certain nuances which need to be considered. So it’s timely to take a closer look and dispel some of the myths.
Why Software Normalisation is Needed
Capturing an inventory of all the software installed across an IT estate returns a vast list of complex and – to the uninitiated – confusing data. Transforming this raw and ‘noisy’ data into meaningful information is a complex, resource-intensive task. Of the many thousands of software applications in a typical IT estate, usually only a few hundred actually carry any commercial exposure, and clarity on those is vital for organisations to ensure compliance. Being able to decipher this data to build a list of licensable software (including associated details such as publisher, product, version, edition, release date, upgrade/downgrade rights etc.) is a major challenge.
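To make that challenge a little more concrete, here's a deliberately simplified sketch of the transformation involved. The raw strings, field names and matching rules below are invented for illustration only; a real normalisation service works from a curated database rather than hard-coded patterns.

```python
# Deliberately simplified illustration - the raw strings, field names and
# matching rules are invented; they merely show the shape of the problem.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class NormalisedTitle:
    publisher: str
    product: str
    version: str
    edition: str
    licensable: bool  # does this title carry any commercial/licensing exposure?

# Raw, 'noisy' inventory data as a discovery tool might report it
raw_inventory = [
    "MS Office Pro Plus 2016 16.0.4266.1001",
    "Microsoft Office Professional Plus 2016 (x64)",
    "7-Zip 19.00 (x64 edition)",
]

def normalise(raw: str) -> Optional[NormalisedTitle]:
    """Map a raw inventory string onto a clean, normalised record."""
    if re.search(r"(MS|Microsoft) Office Pro(fessional)? Plus 2016", raw):
        return NormalisedTitle("Microsoft", "Office", "2016", "Professional Plus", True)
    if raw.startswith("7-Zip"):
        # Operationally interesting, but no commercial exposure
        return NormalisedTitle("Igor Pavlov", "7-Zip", "19.00", "x64", False)
    return None  # unrecognised 'noise' the normalisation service must still resolve

for entry in raw_inventory:
    print(f"{entry!r} -> {normalise(entry)}")
```

Note how two quite different raw strings resolve to the same normalised record, while the freeware title is recognised but flagged as carrying no commercial exposure.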
In the majority of cases, it simply doesn’t make economic sense for organisations to undertake the work required for software normalisation themselves using internal resources. This has led to many solution providers developing software normalisation services underpinned by a comprehensive database which is effectively used as a source of reference to decipher inventory data and identify licensable applications.
Database Size is Irrelevant
When it comes to promoting the effectiveness of their normalisation capabilities, too many providers focus on the size of their software database, making a song and dance about the fact that they have hundreds of thousands of entries. However, a vast amount of this data isn't commercially relevant. While it may be of use from an operational or technical perspective, the value it offers in terms of licensing and compliance is limited. A more meaningful measure might be the number of licensable applications actually included within the database. But even that measure on its own isn't sufficient: we also have to consider quality.
Quality is What Really Counts
What if the process for identifying, categorising and recording those licensable applications within the normalisation database isn't rigorous and consistent? What if unskilled resource is used instead of experts with the right levels of knowledge and experience? The end result will be inaccurate entries, and what use is a massive database if it's full of rubbish?
In Certero’s experience, understanding how the software vendors themselves define their products is essential for developing an effective normalisation database. Most vendors use SKUs (stock keeping units), which serve as each vendor's definitive identifier for each software application. Using these SKUs is therefore the only way to eliminate ambiguity and accurately populate the normalisation database.
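As a rough sketch of the principle – and to be clear, the SKU strings and product details below are invented placeholders, not real vendor SKUs – keying the normalisation data on SKUs makes each lookup unambiguous:

```python
# Sketch of SKU-keyed normalisation data. The SKU strings and product
# details are invented placeholders, not real vendor SKUs.
sku_catalogue = {
    "SKU-0001": {"publisher": "Microsoft", "product": "Office",
                 "version": "2016", "edition": "Professional Plus", "licensable": True},
    "SKU-0002": {"publisher": "Microsoft", "product": "Visio",
                 "version": "2016", "edition": "Standard", "licensable": True},
}

def resolve_by_sku(sku: str) -> dict:
    """Resolve an application unambiguously via the vendor's own SKU."""
    try:
        return sku_catalogue[sku]
    except KeyError:
        # No SKU available, or no match: falls back to manual research
        # by skilled analysts, as discussed below.
        raise LookupError(f"{sku} not in catalogue - requires analyst research")

print(resolve_by_sku("SKU-0001"))
```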
Some providers choose not to use SKUs in their normalisation database, instead opting to create their own definitions for different software titles. This way of working was established at a time when software normalisation was new to the market and offered a quick answer to market demand. While it initially provided a good resource, this legacy approach – which isn't necessarily accurate – has become limited in meeting today's requirements.
Obviously there are some vendors who don't use SKUs. In these circumstances the normalisation provider has to conduct research to establish the necessary information about the particular application in question and so accurately populate the database. Again, this highlights the need for skilled resource and high-quality processes.
UNSPSC (United Nations Standard Products and Services Code)
A further development is the requirement for normalisation to classify installed software in line with UNSPSC application categories. Again, doing this accurately requires a rigorous, consistent approach and a genuine understanding of the standard. Shockingly, we have come across numerous instances of providers' normalisation databases containing glaring inaccuracies.
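As a simple illustration of what that classification step looks like in practice – the codes below are placeholders standing in for real UNSPSC values, and the product keys reuse the invented records from the earlier sketches:

```python
# Placeholder illustration - the codes stand in for real UNSPSC values.
unspsc_map = {
    ("Microsoft", "Office"): "43230000",   # placeholder software category code
    ("Igor Pavlov", "7-Zip"): "43230000",  # placeholder software category code
}

def classify(publisher: str, product: str) -> str:
    """Return a UNSPSC category for a normalised title, or flag it for review."""
    return unspsc_map.get((publisher, product), "UNCLASSIFIED - needs analyst review")

print(classify("Microsoft", "Office"))
print(classify("Unknown Corp", "Mystery App"))
```

The principle is the same as with SKUs: the mapping is only as good as the rigour and expertise applied when it was built.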
An Example
Let’s consider the case of a customer who, for example, is only interested in gaining a clear, normalised view of their Microsoft estate. All they need from their chosen solution provider is assurance that its database contains the information necessary to accurately identify and normalise every instance of their deployed Microsoft applications. The fact that one normalisation provider has a database with millions of entries, while another's has only thousands (I exaggerate for effect), is of no relevance whatsoever. Either of them could be the one with the most accurate data, and the only way to be certain is to look at the quality of the data itself.
In Conclusion
So what does this all boil down to? Basically, the common practice of using database size as a measure of a normalisation provider's capability is flawed. If you want to make sense of the software installed across your estate and ensure licensing compliance and optimisation, our advice is to ask your provider what processes they use to generate and maintain their normalisation database. A rigorous, consistent approach, skilled resource, the use of manufacturer SKUs, correct UNSPSC categorisation and comprehensive coverage of commercially licensable applications are absolute essentials. Remember that accuracy far outweighs size; ultimately, your objective should be to work with a provider who understands and champions quality over quantity.

