Poor Data Quality – Do you measure it?
Poor data quality, including in unstructured data, can hit companies in the pocketbook. But do companies evaluate it as a bottom-line cost? I don’t think so. We jump through hoops showing the value of our software and its impact on the bottom line (which is fine), but on the flip side, do companies actually look at the cost of wrong, erroneous, or irrelevant information that results in bad decisions?
In a recent blog, When Poor Data Quality Lands on the Ledger, the author references a Mental Floss article, “10 Very Costly Typos.” The examples include a missing hyphen in programming code that led to the destruction of the Mariner 1 spacecraft in 1962, at a cost of $80 million. In 2007, a Roswell, New Mexico car dealership ran a scratch-lottery promotion in which 1 out of 50,000 tickets was supposed to reveal a $1,000 cash grand prize; instead, every ticket was printed as a grand-prize winner. Rather than pay out the $50 million that would have implied, the dealership gave out $250,000 in Walmart gift certificates. More recently, in March 2013, a typographical error in the price of pay-per-ride cards on 160,000 maps and posters cost New York City’s Transportation Authority approximately $500,000.
These stories may sound humorous, but I can assure you they did not put smiles on the faces of the organizations involved. And all of them come down to plain and simple proofing. Meanwhile, organizations continue to waste time searching for information they can’t find, re-creating the information they couldn’t find, and then using the bad information to make decisions. Does that impact the bottom line? Can that approach be costly? You bet.
Does your organization spend time quantifying the cost of poor data, whether structured or unstructured? If so, how?