Security in the cloud is a consistently discussed topic. But what about your social media activities? How do you train your users, how do you identify non-sanctioned information, and what is your damage control plan? Too much effort, you say? Loose lips sink ships. I'd think twice about that.
In an article posted on CMSNewsWire, author Matthew Brodsky focused on the liabilities associated with social media. An interesting topic, especially since Americans are sue-happy. The article, Getting Ship-Shape on Social Media Risks, was something of an eye-opener, at least to me.
According to the article, “there’s the exposure from vicarious liability, which holds companies liable for what their employees say on social media, whether that’s in the performance of work duties or during their downtime on personal computers on social activity that may or may not be related to their employment.” Let that sink in for a few minutes!
It’s not only the employees. Brodsky recommends involving the organization’s marketers, who must be up to speed on the kinds of social media content that violate advertising laws and regulations.
In an interview with Ethan Wall, of The Social Media Law Firm, Mr. Wall gave a real-world example of social media gone wrong. “Social media legal risks are not strictly limited to irresponsible teenagers. Just last month the Securities and Exchange Commission filed securities fraud charges against a trader whose false tweets caused sharp drops in the stock prices of two companies and triggered a trading halt in one of them,” Wall said. The trader allegedly created fake Twitter accounts, made them resemble the accounts of known securities research firms, then tweeted false statements about two companies.
Despite these risks, the known unknowns and the unknown unknowns, corporate investment in social media will of course not slow down; perhaps what is needed is a focus on quality. And don’t forget to consider technology that proactively identifies threats before they materialize. Our products identify and automatically generate semantic metadata (concepts in context) and auto-classify content against a taxonomy. Any content that is created or ingested can be included. This catches the breach before it happens.
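To make the auto-classification idea concrete, here is a minimal sketch of screening a draft post against a risk taxonomy before it goes out. Everything in it is hypothetical: the taxonomy categories, the trigger terms, and the simple keyword matching stand in for the semantic, concepts-in-context analysis a real product would perform.

```python
# Hypothetical risk taxonomy: category name -> trigger terms.
# A real product would use semantic analysis, not bare keywords.
TAXONOMY = {
    "financial-disclosure": {"earnings", "merger", "acquisition"},
    "advertising-claims": {"guaranteed", "risk-free", "cure"},
    "confidential": {"internal", "unreleased", "nda"},
}

def classify(post: str) -> list[str]:
    """Return the taxonomy categories a draft post matches."""
    words = {w.strip(".,!?:;").lower() for w in post.split()}
    return sorted(cat for cat, terms in TAXONOMY.items() if words & terms)

def review_before_posting(post: str) -> bool:
    """Hold posts that hit any risk category; approve the rest."""
    hits = classify(post)
    if hits:
        print(f"HOLD for review ({', '.join(hits)}): {post!r}")
        return False
    return True
```

Wiring a check like this into the publishing workflow is what turns classification into prevention: a flagged post is held for human review instead of reaching the public timeline.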
In the scenario above, the trader’s objective was not to harm but to profit from his trades, in a reckless way. What about the employee who does seek to harm the organization with an inappropriate tweet? How will you stop it from happening? And how will you contain the damage after the fact?