Artificial Intelligence – Utopia or Inferno? You Decide

What is the future of artificial intelligence (AI)? Is it a doomsday scenario in which the machines we are creating take over mankind? Will they overcome us and keep us as pets? Seriously, there is a religion in California that believes exactly that: we will end up as their pets. Better than being treated like livestock, so they say. And no, I am not one of those Chicken Littles crying out that AI is ‘bad.’

It will probably be the cybercriminals who get AI right. Gartner predicts that by 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. Well now, isn’t that great news? Here’s hoping an autonomous system isn’t ever flying a plane I’m on.

We could end up ruled and governed by machines that have no clue what they are doing, sort of like now. We can sleep easy, though; Google is going to ‘get it right.’ Its Chief Executive Officer, Sundar Pichai, in a blog post outlining the company’s AI policies, noted that even though Google won’t use AI for weapons, “we will continue our work with governments and the military in many other areas, including cybersecurity, training, and search and rescue.” What does that mean?

We can all laugh at Microsoft, whose Tay chatbot went from innocent conversationalist to crazed racist in a single day, corrupted by Twitter trolls. Do you think Microsoft knew that was going to happen? Of course not. And regardless of what you think about Microsoft, Tay probably grew up in comfort, with no expense spared, fell in with the wrong crowd, and cracked within 24 hours.

Or do we jump on the bandwagon about losing jobs? Not so, they tell us. Then what about the factory in China that went from 650 employees to 60, increased its productivity by a whopping 162.5 percent, and cut its defect rate from 25 percent to under 5 percent? Won’t happen in the US? Don’t count on it. And what about autonomous soldiers? Is that one OK?

Stephen Hawking, Elon Musk, and hundreds of other scientists and technologists signed an open letter calling for deeper research into the potential risks of developing artificial intelligence. And with good reason. Hawking warned that AI could spell the end of humanity. I don’t know about you, but I think Stephen Hawking’s gray matter was a bit grayer than mine. What did he see that we don’t?

And when things go wrong, and they will, what about the folks who argue that the robot itself should be held liable? After all, there has already been serious discussion of AI personhood and the possible criminal liability of AI systems. OK, stop right there. Enough.

If you are looking to be on the cutting edge, not the bleeding edge, and would like to tackle your metadata problems, we remain unique in the industry in our ability to generate multi-term metadata. Want a third-party opinion? Read more about us in this file analytics report by an independent research firm.