The Cost of Bad Data – 12 Percent of Your Revenue

If your organization is like most, it has bad data. Data quality usually isn't a CIO priority, but more organizations are waking up to the fact that cleaning up data requires real due diligence. After all, data is what you use to make decisions.

Typically, I don't like to cite figures that apply to the whole world, but IBM estimates the cost of poor data at $3.1 trillion. And Experian's findings show that bad data has a direct impact on the bottom line of 88 percent of American companies, with the average company losing around 12 percent of its total revenue.

These numbers paint a very real picture of the negative impact bad data has on our economy. Part of the problem is that business users often correct errors as they work with data but don't take the time to notify the data's creators so the source itself can be fixed. That creates a problem that persists throughout the life of the data.

Now let's look a bit more closely at the mix of unstructured and semi-structured data. Did you know that less than 1 percent of your unstructured data is ever analyzed or used at all? What's worse, unstructured data makes up the majority of what you hold: IBM estimates up to 80 percent of your data is unstructured. How do you identify the value of these assets, use them to your competitive advantage, reduce risk, and keep the corpus in prime shape?

We recommend you cleanse your data first, a process we call content optimization. From long experience, we can guarantee that you have unused, low-value, and redundant, outdated, or trivial (ROT) content across your repositories. What can our conceptClassifier platform do to clean up and optimize your unstructured, and even structured, data?

  • Dedupes content and identifies unused copies and versions, enabling the elimination or archiving of documents that would otherwise lead to decisions based on erroneous information (a simple illustration of the dedupe idea follows this list)
  • Removes ROT and other low-value content
  • Identifies privacy or sensitive information exposures
  • Organizes and optimizes high-value content, offering real insight
  • Identifies and remediates policy violations, reducing noncompliance
  • Applies and enforces governance policies in real time
  • Cleanses content, offering defensible deletion with full audit capability
  • Designed for subject-matter experts, with a highly interactive, real-time, intuitive interface that enables rapid deployment and maintenance and quickly delivers business benefits
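
As a rough illustration of the dedupe idea only, here is a minimal Python sketch, not our platform's implementation, of how near-identical documents can be grouped by hashing their normalized text; the folder path and helper names are hypothetical.

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def content_fingerprint(path: Path) -> str:
        """Hash the normalized text of a file so exact copies and
        whitespace-only variants collapse to the same fingerprint."""
        text = path.read_text(encoding="utf-8", errors="ignore")
        normalized = " ".join(text.split()).lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def find_duplicates(root: Path) -> dict:
        """Group text files under `root` by fingerprint; any group with
        more than one member is a candidate for elimination or archiving."""
        groups = defaultdict(list)
        for path in root.rglob("*.txt"):
            groups[content_fingerprint(path)].append(path)
        return {h: paths for h, paths in groups.items() if len(paths) > 1}

    if __name__ == "__main__":
        # "./repository" is a placeholder folder name for this sketch.
        for fingerprint, paths in find_duplicates(Path("./repository")).items():
            print(f"{fingerprint[:12]}: {len(paths)} copies -> {paths}")

Hashing catches exact and near-exact copies; identifying versions and unused content also requires metadata such as edit history and access dates, which is beyond this sketch.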

So, if you are still with me, you are going to say this is no big deal, that other software solutions can do pretty much the same thing. Except for one difference. We automatically generate compound term metadata: multi-word metadata consisting of the subjects, topics, and concepts found within the content itself is identified and auto-classified to one or more taxonomies. It certainly helps to know what's in your content before you take action on it.
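
To give a feel for what compound term metadata means in practice, here is a small Python sketch, not the conceptClassifier implementation, that extracts multi-word phrases and matches them against a toy taxonomy; the taxonomy nodes and terms below are made up for illustration.

    import re
    from collections import Counter

    # Hypothetical mini-taxonomy: node label -> multi-word terms that map to it.
    TAXONOMY = {
        "Privacy & Compliance": ["personally identifiable information", "data breach"],
        "Records Management": ["retention schedule", "defensible deletion"],
    }

    def compound_terms(text, max_words=3):
        """Count the multi-word phrases (2..max_words tokens) found in the text."""
        tokens = re.findall(r"[a-z]+", text.lower())
        counts = Counter()
        for n in range(2, max_words + 1):
            for i in range(len(tokens) - n + 1):
                counts[" ".join(tokens[i:i + n])] += 1
        return counts

    def classify(text):
        """Return taxonomy nodes whose compound terms appear in the document."""
        found = compound_terms(text)
        return [node for node, terms in TAXONOMY.items()
                if any(term in found for term in terms)]

    doc = ("The retention schedule requires defensible deletion of files "
           "containing personally identifiable information.")
    print(classify(doc))  # -> ['Privacy & Compliance', 'Records Management']

Real classification also weighs synonyms, context, and term frequency, but even this toy version shows why a multi-word term such as "defensible deletion" carries more meaning than a single keyword like "deletion".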

According to Forrester, file analytics (our content optimization) is quickly becoming a high organizational priority. Is your organization concerned about the rapid explosion of unmanaged unstructured data? Does it recognize the risk?

Join us for our Discovery, Risk, and Insight in a Metadata-Driven World webinar on Wednesday, June 13. Discovery, risk, and insight mean something different to every organization, and even to different locations within the same company. This webinar shows how automatically generated semantic metadata can provide a detailed view of risk mitigation for data security, compliance, and operational intelligence.