Is Your AI Application Sassing You?
I do believe artificial intelligence (AI) is somewhere in our future, though not the near future. It is a Pandora's box just waiting to be opened. I've written blogs about AI going awry, and readers get a good chuckle. But there is a very serious side to AI: I think most organizations see the end point of an AI project, not the ramifications of its outcomes.
Appropriate planning includes accounting for the reasoning behind your AI project, because you, meaning the organization, will need to explain it. AI can learn many things from data, including the wrong things. Standard setters and regulators increasingly look for that 'explainability' (I suspect they made up the word). AI must make probabilistic determinations, and those determinations may not be understood, or may reflect an error in reasoning.
Pressure is growing to open up black boxes and make AI explainable, transparent, and provable. In other words, you will need a risk framework. Data must be scrutinized so it does not stay silent about legal, compliance, and ethical landmines. Cyber intrusion into AI systems can have catastrophic consequences: if data is compromised, the machine learning built on it is corrupted.
Your risk assessment should cover performance, security, and control, along with the ethical, societal, and economic components of the AI system. If you don't have a plan that addresses these components in detail, I suggest you go 'back to the drawing board.' Many healthcare organizations are putting AI on the back burner until they address its societal and ethical components.
When all is said and done, who wants a sassy AI system?
If you are looking to be on the cutting edge, not the bleeding edge, and would like to tackle your metadata problems, we remain unique in the industry with our ability to generate multi-term metadata. Want a third-party opinion? Read more about us in this file analytics report by an independent research firm.