Poor Data Quality – The Cost May Surprise You
In a perfect world, all data would be accurate and timely; in other words, always perfect. Unfortunately, it isn't a perfect world. In an SAP survey conducted by Forbes, the majority of respondents said the impact of poor-quality data on the bottom line exceeded $5 million. Nearly one in five (18%) estimated the annual cost at more than $20 million.
According to IDC, 80% of all data is unstructured, 60% of documents are obsolete, and another survey reported that 50% are duplicates. No wonder data quality is an issue. What is curious is that, despite the seemingly widespread awareness of the financial impact of poor data, most organizations still appear to be at a loss as to how to fix the problem.
At the end of the day, the knowledge worker is still ultimately responsible for accurately tagging unstructured content to achieve a day-to-day reduction in errors. A serious consequence of the problem is the inability to make sound decisions when the data cannot be trusted or relied upon. The ripple effect reduces the ability to achieve operational efficiencies and can mean the loss of competitive advantage, not to mention legal and regulatory non-compliance issues.
For years, the same topic has come up for discussion again and again. The Human Dimension is probably an organization's biggest weakness and its biggest strength. From an information access and data quality perspective, what goes in comes out. Removing the end user from the tagging process removes the ambiguity, and it enables content to be related in a meaningful way without end-user involvement. This enhances the value of knowledge far beyond the original design intent and extends the reach of content so it can be accessed and used by multiple stakeholders who may not even have known it existed. It also transforms the content into a knowledge asset, because the data can be trusted as reliable and correct.
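To make the idea concrete, the sketch below shows one simple way automated tagging can work: a rule-based pass that assigns metadata tags from keyword matches, so the labels applied to content no longer depend on each individual author's judgment. This is a minimal illustration only; the tag names, keywords, and function are hypothetical examples, not a reference to any particular product or taxonomy.

    # Minimal sketch of rule-based automatic tagging (illustrative only).
    # Tag names and keyword lists below are hypothetical, not a real taxonomy.

    TAG_RULES = {
        "invoice":  ["invoice", "purchase order", "payment due"],
        "contract": ["agreement", "hereinafter", "terms and conditions"],
        "hr":       ["employee", "benefits", "performance review"],
    }

    def auto_tag(text: str) -> list[str]:
        """Return every tag whose keywords appear in the document text."""
        lowered = text.lower()
        tags = [tag for tag, keywords in TAG_RULES.items()
                if any(keyword in lowered for keyword in keywords)]
        return tags or ["untagged"]  # flag unmatched content for human review

    if __name__ == "__main__":
        sample = "This agreement sets out the terms and conditions of employment."
        print(auto_tag(sample))  # ['contract']

In practice an organization would plug in its own taxonomy, or a trained classifier, but the principle is the same: tagging happens consistently at the point of capture rather than relying on each knowledge worker to do it correctly.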