Bad data is like fake news – you can’t trust either.
It’s fascinating how our industry continuously comes up with new terms for things that already exist. Sometimes this is justified, because the new name covers additional features.
A good example is big data analytics. For many years it was simply called business intelligence (BI), and it was used primarily for marketing purposes. That was probably because it was not easy to access, let alone extract, relevant data from so many unrelated network elements, switches and servers.
The data was even stored in different formats, anything from hexadecimal to straight ASCII. No wonder the most usable information came from billing systems that were able to decipher data from network switches and store it in usable formats.
It is also no surprise that the earliest forms of data analytics came from auditors tasked with determining the accuracy of those systems. From there, today’s sophisticated revenue assurance (RA) and fraud management systems (FMS) evolved. When you take into account the billions of records processed daily by telco operators worldwide, you’ll quickly understand why it soon became ‘big data’.
What was traditionally batch processing soon evolved into real-time processing, and as more services were offered the amount of data rose proportionally. All of this was made possible only by the dramatic drop in the cost of storage, servers and processing power. Today, that same data is used for every conceivable purpose, from network management to improving the customer experience.
We are now, rather cleverly, introducing machine learning and artificial intelligence (AI) to use that data even more effectively - not just to analyse what happened in the past but to predict what will happen in the future.
That’s all well and good if the data is accurate, trustworthy, untainted and from reliable sources. It is no longer good enough to assume that all data has integrity. A great analogy is the current furore over ‘fake news’ and the questions being asked about whether we can believe everything that is published, even by reliable news services.
Just as revenue assurance and fraud management systems led the way in the functional use of raw data, today they lead the way in determining whether the data being processed is reliable.
Data integrity is now, more than ever, being seen as key to any analytical process. Incomplete, incorrect or invalid data will lead to undesirable and potentially dangerous results when used as the basis for any AI activity.
Most telecom operators that have RA systems in place know their value in providing relevant data, but many may not realize that companies like WeDo Technologies have evolved their systems to ensure the integrity of that data as well. It is not easy to place a value on this, but, as in the days before RA systems were introduced, the cost benefits may not become tangible until a major breach occurs.
To assess integrity, aggregated data is collected from several measurement points along the revenue chain and reconciled using historical, cross-system and threshold validation rules. The same process applies to fraud management, which is also time-sensitive. Verified data combined with an analytical tool such as WeDo’s RAID FMS empowers fraud managers to build their own data mining models tailored to specific needs, helping them find unusual patterns and correlations.
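To make the idea concrete, here is a minimal sketch of what a cross-system threshold check might look like. This is an illustration only, not WeDo’s actual implementation: the measurement points, field names and 0.5% threshold are all hypothetical assumptions.

```python
# Hypothetical sketch of reconciling two measurement points on the revenue
# chain with a simple threshold validation rule. Names and values are
# illustrative, not taken from any real RA product.
from dataclasses import dataclass


@dataclass
class Measurement:
    point: str       # where on the revenue chain the count was taken
    records: int     # number of records observed at this point
    revenue: float   # aggregated revenue observed at this point


def reconcile(upstream: Measurement, downstream: Measurement,
              max_loss_pct: float = 0.5) -> dict:
    """Compare record counts between two points; flag a breach when the
    percentage of records lost downstream exceeds the threshold."""
    lost = upstream.records - downstream.records
    loss_pct = 100.0 * lost / upstream.records
    return {
        "from": upstream.point,
        "to": downstream.point,
        "lost_records": lost,
        "loss_pct": round(loss_pct, 2),
        "breach": loss_pct > max_loss_pct,
    }


# Example: 10,000 of 1,000,000 records never reach billing (a 1% loss),
# which exceeds the assumed 0.5% tolerance and raises an alert.
mediation = Measurement("mediation", 1_000_000, 250_000.0)
billing = Measurement("billing", 990_000, 247_500.0)
alert = reconcile(mediation, billing)
```

In practice such rules run continuously over billions of records, and the thresholds themselves are often derived from historical baselines rather than fixed by hand, but the principle is the same: measure at each point, compare, and alert on the difference.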
By combining insights gathered from machine learning with the power of an extensive rules library, an operator is able to address all kinds of fraud, delivering accurate and actionable results in mere milliseconds, and the same data can be used for a multitude of other analytical processes.
Data integrity may not get the same headlines that big data, machine learning and AI garner, but without it all the rest will deliver the exact same benefits as ‘fake news’!
Let me know your thoughts and please feel free to Contact Us should you have any questions.