An Insight into Healthcare Security


The healthcare industry experiences more data breaches than any other, followed by finance, insurance, manufacturing, telecommunications, and retail. Unfortunately, the value of medical identities makes them lucrative black-market business. Security experts now assume all healthcare organizations’ perimeters have been breached, with cybercriminals waiting for the opportune time to attack. It is just a matter of when.

Although there has been a significant rise in data breaches in this sector, security lapses cannot be attributed to one definitive cause. Medical records innately contain a wealth of information, and so fetch a high price on the black market. However, the forces at play in healthcare breaches range from negligent end users to misconfigured hardware – factors that arise regardless of industry. The healthcare industry is in the spotlight due to the nature of its information. If 30,000 records were breached in a different industry, people might not even blink an eye, at least not these days.

The consequences of a data breach can be significant. Anthem, the second largest insurer in the US, faces a $115 million settlement for a breach that occurred in late 2014, impacting 78.8 million consumer records – personal data that included names, birthdays, social security numbers, email and postal addresses, employment information, and income. Internally, Anthem has spent an additional $260 million on security improvements. Although not much of a comfort, Anthem did offer free two-year credit monitoring services to all affected clients. In an unexpected turn of events, the California Department of Insurance stated with a ‘medium degree of confidence’ that the breach was executed under the instructions of a foreign government.

The US Department of Defense Inspector General recently reported that Defense Health Agency sites have failed to consistently implement technical, physical, and administrative protocols, and may have committed Health Insurance Portability and Accountability Act (HIPAA) violations in the process. These vulnerabilities ranged from password configurations that failed to meet Department of Defense requirements to user access not being granted based on assigned duties. “As a result, ineffective administrative, technical, and physical security protocols, resulting in HIPAA violations, could cost Military Treatment Facilities up to $1.5 million in penalties each year,” the report states. As reported at the time, “What’s more troubling is that officials said when network administrators at the audited sites discovered vulnerabilities, they often failed to address them.”

The first HIPAA settlement involving a wireless health services provider occurred in May 2017. CardioNet, a provider of remote heart monitoring services, agreed to pay the US Department of Health and Human Services (HHS) Office for Civil Rights (OCR) $2.5 million after it was found to have compromised the electronic protected health information (ePHI) of 1,391 people. It seems the breach stemmed from an employee’s laptop stolen from a parked car.

In 2016, the HHS OCR levied its first fine against a business associate. Catholic Health Care Services, which provides management and information technology services to skilled nursing facilities, paid a $650,000 fine after protected health information (PHI) was compromised when a company-issued iPhone was stolen. The device was not encrypted and did not have a password lock.

St. Joseph Health (SJH) agreed to settle potential violations of HIPAA, following a report that files containing electronic protected health information (ePHI) were publicly accessible through internet search engines from 2011 until 2012. SJH will pay a settlement amount of $2,140,500 and adopt a comprehensive corrective action plan.

On a much grander scale and outside the healthcare industry, in June 2017, South Korean web hosting company, Nayana, was badly hit by a ransomware attack, taking down 153 Linux servers and making over 3,400 client websites unavailable. Basically, the company didn’t patch its web servers, leaving them vulnerable to intrusion. For whatever financial reason, Nayana paid the ransom, negotiated from US $1.6 million down to US $1 million in bitcoin. Nayana was then able to start recovering its data, albeit with a slew of database errors. In addition to the time and money spent on the recovery, the organization had to give discounts and refunds to affected customers. Not all data could be restored, and clients were promised free hosting for life. On the other side of the coin, had the hackers not released the data, Nayana would have lost further millions of dollars.

No industry is immune. In 2017, the Equifax breach affected approximately 145 million people – an unprecedented impact that should act as a warning for organizations that might have slipped into a state of apathy when it comes to network defenses. There have also been revelations from two big companies – Uber and Yahoo – of older, devastating cyberattacks. Uber came under fire after revealing that it had deliberately covered up a massive cybersecurity breach in October 2016, in which 57 million user records were stolen. The company covered up the entire debacle, including paying $100,000 to the hackers. In another shocking piece of news, Yahoo revealed that every single account in its database – all 3 billion of them – had been compromised in the 2013 security breach on the platform, making it one of the largest cyberattacks in history.

The rate at which valuable identity information is flying out of the control of firms is alarming – more than 3,500 records per minute. Around 23 percent of the top data breaches over the past five years contained consumers’ identity information – such as names, dates of birth, addresses, and account passwords. Staff negligence is responsible for most breaches, posing the biggest worry to security staff. Other issues, such as rapid cloud adoption and bring your own device (BYOD), which can seem like good ideas, raise major security issues.

The Business Issues
Data breaches in the healthcare industry are at an all-time high. According to the report Security Trends in the Healthcare Industry, authored by IBM, the industry has one of the lowest rates of data encryption, and lack of staff awareness remains one of its biggest threats. Insiders were responsible for 68 percent of all network attacks compromising healthcare data in 2017. As IBM explains, “The reason the healthcare industry is so frequently targeted is because health records contain a wealth of information that can be used for fraud. Health records typically contain credit card data, email addresses, social security numbers, employment information, and medical history records – much of which will remain valid for years, if not decades.” IBM also cites the fact that healthcare organizations’ cybersecurity measures are often out-of-date and ineffective.

In 2015, the healthcare industry got a wake-up call, with more than a million electronic health record (EHR) breaches, representing a 62 percent increase. 2017 was better, except for the aforementioned Anthem issue. Healthcare and financial services organizations need to be more than vigilant in protecting individual privacy information, given the personal data it contains. The estimated price for an EHR on the black market is around $50. According to the IBM report, this number is probably low, as EHRs for sale on the black market typically contain financial and other personal data, increasing their value.

What About the Cost?
According to the 2016 Cost of a Data Breach Study, the healthcare industry’s data breach cost is $355 per record, more than twice the $158 mean cost per record across all industries. The estimated loss to the industry is $5.6 billion per year. Considering the rising costs of breaches and ransomware demands, healthcare organizations can no longer ignore the ramifications of multi-million dollar fines caused by their own neglect. Since most healthcare organizations are non-profit, this will eventually become unacceptable to healthcare consumers. Ransomware has moved out of test mode and is now wreaking havoc on all organizations, regardless of industry.

Healthcare organizations are even more vulnerable. Unfortunately, many companies are paying to get their data back. This is a risky option, as there is no guarantee the data will be returned, and it may have been sold on the black market. According to one survey, 70 percent of business executives with experience of ransomware attacks had paid to get their data back, with more than half paying over $10,000 and one in five paying over $40,000. Ransom demands increased by 250 percent in 2017, and total payments are well over a billion dollars and rising.

Additional internal threats to healthcare security, which are somewhat unique to the industry, are posed by the internet of things (IoT), mobile health apps, and the cloud. The healthcare sector just is not keeping pace. For example, according to Ponemon Institute, “In just two short years, the number of US hospitals providing patients with the ability to digitally view, download and transmit their health information jumped from just 10 percent in 2013 to 69 percent in 2015.”

Healthcare consumers will become more demanding as the cost of healthcare continues to balloon out of control. From the perspective of the healthcare consumer, “Despite major medical and technological advancements in our country, and the fact that patients are more active consumers of care, healthcare is still inefficient, complex and unsatisfying for them,” said Tom Skelton, Chief Executive Officer of Surescripts. Consumers want more choice on how and where they receive care, through alternatives like telehealth, mobile, and other electronic means. More than half – 52 percent – of patients expect doctors to start offering remote visits, and more than one third – 36 percent – believe most doctor’s appointments will be remote in the next ten years. To achieve these visions, the healthcare industry needs to make great strides.

Outsourcing Electronic Health Records
Think you are safe by using a third-party vendor to outsource the management of electronic health records (EHRs)? Think again. According to IBM, “The stakes can be high in this area. A breach affecting a company with a large market share could compromise multiple millions of patient records at once.” In one attack, the weapon of choice could be credential-stealing malware, allowing attackers access to systems and ultimately compromising over a quarter of a million healthcare records from more than a dozen organizations. Unfortunately, such an incident isn’t hypothetical. One of the largest healthcare breaches of the last five years was the compromise of a provider of software services to the healthcare industry that exposed data on almost four million individuals. According to the company’s statement on the breach, the ‘sophisticated cyberattack’ was detected 19 days after attackers gained unauthorized access to its network. Clients weren’t notified until almost a month after the attack began – ample time for cybercriminals to conduct nefarious activities with the unsuspecting patients’ information.

Insider Threats
According to Ponemon Institute in its 2017 Cost of Data Breach Study: Global Overview, “68 percent of all network attacks targeting healthcare organizations in 2016 were carried out by insiders, and more than one third of those attacks involved malicious actors.” The rest can be attributed to carelessness and preventable actions. A study by Skyhigh Networks found that 49 percent of content uploaded to OneDrive for Business contained a security vulnerability. As the study put it, “From falling victim to phishing scams to misconfiguring servers to losing laptops, the mistakes and failings of an organization’s otherwise loyal insiders can often give attackers a wide-open gateway into its networks.” Most such users are employees. The average organization has 74 business partners as well as trusted third parties, including clients and contractors, with whom it conducts business. Collaboration should be secured at the content level, not the document level, and take place in real time, to prevent negligent or accidental sharing of confidential information.

Verizon’s 2017 Data Breach Investigations Report indicates that 12 percent of employees will click on a malicious email, typically at around 3:45pm on a Thursday. Business email compromise (BEC) has risen 45 percent in only one year. These phishing threats are delivered through malicious attachments and URLs, and their sophistication now bypasses even ‘next generation’ defenses. This well-disguised malware enables hackers to traverse networks and perpetrate thefts of information or money. End users need to be educated to look for URLs that raise a red flag. Often, these URLs will be similar to existing URLs with an additional character added, for example, Microsoft may appear as ‘mmicrosoft.’ These types of URLs are often overlooked.
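The lookalike-URL check described above can be automated. The sketch below is a minimal Python illustration, not a production mail filter: it compares a domain against a hypothetical allow-list using simple string similarity, so that a near-miss such as ‘mmicrosoft.com’ is flagged while the genuine domain passes.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of domains the organization trusts.
TRUSTED_DOMAINS = ["microsoft.com", "paypal.com", "example-bank.com"]

def looks_like_spoof(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain that is suspiciously similar to, but not equal to,
    a trusted domain -- e.g. 'mmicrosoft.com' vs 'microsoft.com'."""
    domain = domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: legitimate
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True   # near match: likely a lookalike
    return False

print(looks_like_spoof("mmicrosoft.com"))  # True
print(looks_like_spoof("microsoft.com"))   # False
```

Real gateways use far richer signals – homoglyph tables, domain registration age, reputation feeds – but the point holds: a one-character spoof scores almost identically to the real domain, which is exactly why human eyes overlook it.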

The goal of IT is to identify and quarantine suspicious emails before they reach end users. But this rarely happens. Users should also be educated and trained to recognize suspicious emails, so they do not open attachments. This also rarely happens.

Deprovisioning employees should be a straightforward process and, when followed, can prevent data breaches caused by former employees – some of which can be quite costly. In 2017, Boeing disclosed a breach involving personal information for 36,000 employees. The cause? An employee forwarded a document to his spouse. In a more costly example, a disgruntled former IT administrator for Georgia-Pacific, a paper manufacturer that employs 350,000 people, wreaked havoc in 2014 by using a VPN to access company servers. The administrator installed his own software and proceeded to cause an estimated $1.1 million of damage.

According to a study by OneLogin, failure to deprovision employees from corporate applications contributes to data breaches at those organizations. The research found that almost half of the study respondents are aware of former employees who still have access to corporate applications, with 50 percent of ex-employee accounts remaining active for more than a day after their leaving dates. A quarter of respondents took more than a week to deprovision a former employee, and a quarter don’t know how long accounts remain active once employees have left the company.
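The audit the OneLogin findings imply can be reduced to a cross-check of HR leaver dates against still-active application accounts. This is a hedged sketch with invented data, not any vendor’s API:

```python
from datetime import date, timedelta

# Hypothetical HR leaver records and application account data.
leavers = {"jdoe": date(2017, 6, 1), "asmith": date(2017, 6, 20)}
active_accounts = {"jdoe", "asmith", "bnguyen"}

def stale_accounts(today: date, grace_days: int = 1):
    """Return accounts still active more than `grace_days` after the
    employee's leaving date -- candidates for immediate deprovisioning."""
    cutoff = today - timedelta(days=grace_days)
    return sorted(
        user for user, left in leavers.items()
        if user in active_accounts and left <= cutoff
    )

print(stale_accounts(date(2017, 6, 25)))  # ['asmith', 'jdoe']
```

Run daily against a real directory service, a report like this turns the week-long deprovisioning lag the study describes into a same-day fix.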

Tackling the Basics – Solving the Problem with Metadata
The metadata, auto-classification, and taxonomy market is growing, as organizations realize they can no longer cope with content overload. Most vendors in this market focus almost exclusively on improving enterprise search. Although improved search does yield business value, organizations must adopt a more comprehensive approach, and understand the quantifiable business value of building an enterprise metadata framework that can be used and reused to solve any problem that depends on metadata.

Concept-based metadata generation, classification, and taxonomy tools enable organizations to face their most pressing challenges head-on, whether in on-premises, cloud, or hybrid environments. Merely resolving the most basic metadata issues can significantly improve hybrid search and enable concept-based searching. Beyond search improvements, capturing accurate and precise metadata provides enhanced data cleansing, protection of data privacy and sensitive information, migration, text analytics, and records management, as well as secure collaboration. In other words, concept-based metadata puts organizations on the right path to achieving enterprise information governance.

Compound Term Processing
The challenge of dealing with metadata is both obvious and elusive. Harnessing the meaning of content requires tools that can manage and retrieve content at the same rate it is being created, ingested, and distributed. The fundamental component is the quality of metadata, which is required by many applications within an organization.

For solutions that use auto-classification, the classification is either highly general – for example, the department from which a document originates – or is dependent on end-user or system-defined metadata. Metadata created by end users is typically erroneous, subjective, or absent. System-generated metadata is captured at the highest level and is often not usable in a meaningful way to improve the identification of security vulnerabilities found within content. Without the ability to identify ‘concepts in context’, the hierarchical structure contains little value and, more importantly, the metadata is rendered useless to other security applications that could otherwise be improved.

Compound term processing is a new approach to an old problem. Instead of identifying single keywords, compound term processing identifies multi-word terms that form a complex entity and treats them as a concept. Forming these compound terms and placing them in the search engine’s index, or providing the metadata to any application that requires it, means the results delivered are highly accurate, because the ambiguity inherent in single words is no longer a problem. As a result, a search for ‘triple heart bypass’ will locate documents about this topic, even if this precise phrase is not contained in any document. A traditional search will return all documents that contain the words ‘triple’, ‘heart’, and ‘bypass’, where each word has multiple meanings. A concept search using compound term processing can extract the key concepts, and use these concepts to select the most relevant documents. Content that shares the same concepts will be retrieved, even if the search terms do not match.
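The mechanics of compound term processing are proprietary, but the core idea – preferring the longest multi-word match over single, ambiguous words – can be shown with a toy greedy matcher. The concept table here is invented for the example; a real engine derives concepts statistically rather than from a fixed lookup.

```python
# Toy illustration: multi-word terms are treated as single concepts,
# so 'heart bypass' matches as a unit rather than as the ambiguous
# standalone words 'heart' and 'bypass'.
CONCEPTS = {
    "triple heart bypass": "cardiac-surgery",
    "heart bypass": "cardiac-surgery",
    "data breach": "security-incident",
}

def extract_concepts(text: str):
    """Return concepts found in `text`, preferring the longest
    compound term at each position (greedy longest-match)."""
    words = text.lower().split()
    found, i = [], 0
    while i < len(words):
        for n in (3, 2):  # try trigrams first, then bigrams
            term = " ".join(words[i:i + n])
            if term in CONCEPTS:
                found.append(CONCEPTS[term])
                i += n
                break
        else:
            i += 1  # no compound term starts here; move on
    return found

print(extract_concepts("the patient had a triple heart bypass"))
# ['cardiac-surgery']
```

Because matching happens at the concept level, a document tagged ‘cardiac-surgery’ is retrievable even when the query phrase never appears verbatim in its text – the behavior the paragraph above describes.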

The conceptClassifier platform is built on a unique statistical concept extraction and insight engine that uses compound term processing, offering the most mature, adaptive, and scalable technologies available in the market today. These technologies enable the automatic generation of multi-term metadata, to drive accurate auto-classification and deploy intelligent, metadata-enabled solutions enterprise-wide.


Because the conceptClassifier platform is a technology framework, it can be used by a variety of applications that are improved by highly accurate metadata. A discussion of some of the solutions delivered using the platform, along with a case study, can be found below. Additional solutions include secure collaboration, compliance and governance, eDiscovery and litigation support, General Data Protection Regulation (GDPR), engineering search, enterprise metadata repository, knowledge management, research, mergers and acquisitions, and text analytics.

Security of Content in Context
One only has to read the news to know that security breaches, both internal and external, are on the rise, and alarmingly so. Internal data leaks and breaches are disturbingly common, and the healthcare industry leads the pack. Ponemon Institute’s 2017 Cost of a Data Breach Study: Global Overview states that 69 percent of companies reporting serious data leaks said their data security breaches were the result of either malicious employee activities or non-malicious employee error. In fact, the main cause of data security breaches is non-malicious employee error, at 39 percent. The study concludes that these breaches are typically the consequence of complacency or negligence, stemming from insufficient control over access to sensitive or confidential data. Only 16 percent of serious data leaks were linked to hackers or penetration.

Even though 65 percent of employees believe it is their responsibility to ensure that sensitive company information is not leaked, only 32 percent of knowledge workers always clean files of hidden sensitive data before sharing. And 70 percent of those who forward emails with attachments without reading them first do not remove sensitive data before sending, while 80 percent of employees use unsecure file sharing methods, so are putting corporate data at risk.

Where unstructured data is involved, breaches of confidential information become highly likely. The solution, of course, is to ensure security and access controls are in use, up-to-date, and enforced. But that is not the real problem. Organizations need to evaluate the risk of ignoring potential data breaches in which privacy or confidential information is exposed from within content, either internally or externally.

The Impact of Security in the Cloud
Security is the primary concern for organizations contemplating a move to the cloud environment. Shadow IT is emerging as a critical problem. Shadow IT is the result of corporate IT not being able to provide business users with the services and apps they need in order to achieve the maximum degree of productivity. If they cannot get what they need from the corporate IT department, business users simply swipe their own credit cards and obtain the required IT resources elsewhere. In addition, a growing number of workers are storing confidential company information via unauthorized sharing and sync applications, such as Dropbox or Box.

The Concept Searching insight engine, based on compound term processing, can be applied to identify and protect privacy and confidential information. Many security products have some of this capability, such as identifying social security or credit card numbers in unstructured data. Concept Searching’s technologies enable organizations to define what constitutes privacy and confidential information, using the unique terms of their own nomenclature. The extraction of multi-term concepts and descriptors, in the form of metadata, when content is created or ingested, enables organizations to proactively identify and protect content, remove it from unauthorized access, and prevent portability. Since the technologies are not limited by where the content resides – on-premises or in the cloud – content that needs to be secured can reside in diverse repositories and environments.
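Pattern-based identification of privacy data is the simplest layer of what is described above. The sketch below uses regular expressions only; the MRN format is an assumption, and real engines such as the one described here add concept-level matching on top of patterns like these.

```python
import re

# Hypothetical detection rules; the MRN (medical record number)
# format is assumed for illustration.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "mrn": re.compile(r"\bMRN[- ]?\d{6,8}\b", re.IGNORECASE),
}

def find_sensitive(text: str):
    """Return the sorted categories of sensitive data detected in `text`."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

doc = "Patient MRN-1234567, SSN 123-45-6789, card on file."
print(find_sensitive(doc))  # ['mrn', 'ssn']
```

A hit from a scan like this is what would trigger the downstream actions the text describes: moving the document to a secure repository, blocking download, and notifying the responsible staff.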

Privacy and Sensitive Information Identification Case Study
This healthcare organization was migrating over 40 terabytes of legacy content from SharePoint on-premises to SharePoint Online. It needed to ensure all patient privacy and sensitive information was protected according to HIPAA guidelines. Although migration was a top priority, compliance, information governance, and cleaning up legacy content were also necessary. The final objective was to classify the cleansed content to the industry-defined MeSH taxonomy.

The organization chose Concept Searching technologies for their ability to address all its requirements, and to provide both a short-term and a long-term strategy for managing content. The solution deployed provided the organization with one set of technologies to achieve its objectives of data cleanup, migration, protection of privacy information, improved enterprise search, and effective content management. Compliance and information governance were incorporated from the outset, encompassing the identification of a single source of truth, HIPAA adherence, control and protection of critical data, and proactive risk management.

Protected health information and other sensitive data are secured in real time as content is created or ingested. Redaction capabilities are also available. The standard product comes with over 80 rules to address compliance requirements, including those related to HIPAA regulations. Subject-matter experts can easily and rapidly deploy rules to identify any string of content. Content that contains privacy vulnerabilities is automatically moved to a secure repository and prevented from download, and notification is sent to the appropriate personnel for disposition.

Content Optimization
Content optimization, often referred to as file analytics, provides the functionality to cleanse file shares through the classification of redundant, obsolete, and trivial (ROT) content, by identifying duplicates, versions, stale content, and dark data.

The issue with unmanaged data is that it can cause irreparable harm and significantly increase risk. Without visibility into the corpus of content to identify the most accurate and up-to-date information, decisions are being made on erroneous information. In fact, Gartner estimates that 80 percent of decisions are made using unstructured content. Analysts believe that 70 percent of content on file shares contains ROT that should be deleted or archived, 25 percent of content is duplicate, 10 percent has no business value, and an extraordinary 90 percent of documents are never accessed after creation, with 65 percent accessed only once.
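Duplicate detection, the first step of ROT cleanup, can be sketched by grouping files on a content hash. This is a minimal illustration of the technique, not a file-analytics product; real tools add near-duplicate, version, and staleness analysis on top.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str):
    """Group files under `root` by SHA-256 of their contents; any group
    with more than one member is a set of exact duplicates (ROT candidates)."""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Running this over a file share before migration surfaces the duplicate 25 percent cited above, so only one authoritative copy needs to be moved and indexed.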

The conceptClassifier platform performs a detailed file analysis and content inventory and, based on classification decisions, takes action on the content, enabling it to be either managed in place or automatically moved to a more appropriate repository. It is used to identify and protect unknown privacy or sensitive information exposures, GDPR exposures, automatically identify and declare documents of record, significantly reduce migration efforts, and provide concept-based searching in finding relevant electronically stored information (ESI).

Content Optimization Case Study
One client, a law firm, was migrating to SharePoint, and recognized that its corpus of content had become unmanageable over the years, due to growth and mergers. With terabytes of content, it was not feasible to manually evaluate the contents to determine the value. To effectively clean up its corpus of content, the firm needed access to what was contained in the documents, not a summary of information. It also needed a method to offer defensible deletion with full audit capability.

The firm chose Concept Searching’s content optimization solution, to identify duplicates, versions, and redundant, obsolete, or trivial (ROT) content, going far beyond the basic cleanup of file shares. The process identifies any data privacy or organizationally-defined sensitive information, undeclared or erroneously tagged records, and noncompliance exceptions. These additional capabilities provided the firm with the ability to identify sources of risk, some of which were unknown, and significantly reduce the amount of content to be migrated, as well as the server footprint required.

One of the unique differentiators the client took advantage of was the ability to create and customize the identification of sensitive or confidential information contained within content, through the taxonomy manager interface. Authorized users can rapidly create their own patterns for detection, using any verbiage or descriptors. Since the Concept Searching technologies generate semantic, multi-term metadata from within content, inter or intra-related content is also identified, even if it does not match the pattern specified.

Search performance was also significantly improved, due to culling content of no value and the new ability to perform concept-based searching, by providing multi-term metadata for the SharePoint search index.

Records Management
With compliance requirements changing rapidly and end user adoption cited as the biggest source of failure in records management, there is a great need to simplify and manage the process. Noncompliance can represent monetary loss as well as disrupt an organization’s business in a global world.

With the General Data Protection Regulation (GDPR) in place, fines have risen significantly. More than 100,000 international laws and regulations are potentially relevant to Forbes Global 2000 companies, ranging from financial disclosure requirements to standards for data retention and privacy. Additionally, many of these regulations are evolving and often vary or even contradict one another across borders and jurisdictions.

The elimination of end user tagging provided by metadata generation capabilities represents a significant step forward in records management, provides high value, and reduces organizational risk. It is simply no longer realistic to expect broad sets of employees to navigate extensive classification options, while referring to a records schedule that may weigh in at more than 100 pages.

The enterprise metadata repository is the backbone of a solution. The approach is to develop a taxonomy that mirrors the file plan, in which content is auto-classified by identifying and assigning correct record identifiers and other organizationally-defined descriptors, optionally applying an appropriate content type, then either managing the record in place or automatically routing it to the organization’s records management platform. Intelligently automating the records identification process is the precursor to an effective records management deployment, and its absence often results in failure.
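The classify-then-route step described above can be sketched as a rule table keyed on classifier output. In this toy version a keyword match stands in for real auto-classification, and the file-plan entries are invented for the example.

```python
# Hypothetical file-plan nodes: each carries the record identifier
# and the destination the matching content should be routed to.
FILE_PLAN = {
    "patient record": {"record_id": "HR-100", "route": "records-center"},
    "invoice":        {"record_id": "FIN-200", "route": "finance-archive"},
}

def classify_and_route(text: str):
    """Return the file-plan entry for the first matching term, or a
    default 'manage in place' disposition when nothing matches."""
    lowered = text.lower()
    for term, entry in FILE_PLAN.items():
        if term in lowered:
            return entry
    return {"record_id": None, "route": "in-place"}

print(classify_and_route("Scanned patient record for admission"))
# {'record_id': 'HR-100', 'route': 'records-center'}
```

The point of the design is that the end user never consults the file plan: tagging and routing happen from the content itself, which is exactly the burden the paragraph says should be lifted from employees.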

A successful records management system must suit an organization’s workflow, be easily adopted by users, and be integrated into their daily activities, ideally transparently, as most users will not remember, nor care about, what constitutes a document of record or the correct tags to apply.

Records Management Case Study
With over 200 terabytes of unmanaged content and offering over 300 discrete services, this government regional agency needed to improve the utilization of technology to automate many manual processes, and ensure that assets were being well managed, and services delivered successfully to 700,000 citizens.

No tagging was taking place, and with 18,000 employees this translated into a legacy of unmanaged, unsearchable content as staff left the agency. Repositories had grown through a decentralized IT function and a siloed approach to the delivery of services. Although policies and processes were in place, the organization wanted to take a proactive stance on information governance, and address issues relating to records management, eDiscovery, and poor information retrieval.

Once the Concept Searching technology platform was in place, the agency was able to tag legacy content with multi-term metadata, as well as tag all content as it was created or ingested. Auto-classification to an enterprise taxonomy provided a framework to identify privacy violations in real time, begin online records management based on the structure, and improve eDiscovery.

A taxonomy that mirrors the records management file plan was implemented, and content was automatically tagged with appropriate descriptors and declared records. Records could be managed in place, automatically routed to the records management application, or routed to designated staff for disposition. This reduced organizational risk, ensured compliance, and eliminated the burden of tagging records from end users.

Intelligent Migration
Organizations in all industries are moving more information to the cloud. In particular, the healthcare and financial services industries should be following stricter guidelines to ensure cloud infrastructure meets stringent security best practices. For organizations with medium to large free text document collections, migration is no trivial matter and cannot be performed by human effort alone.

Concept Searching technologies enable intelligent automatic classification decisions to be made before, during and after migration. These decisions enhance organizational performance and drive down costs, but more importantly enforce corporate and legal compliance guidelines. After migration, one or more taxonomies can be used to enable concept-based searching and for enterprise content management.

Intelligent migration provides the ability to automatically generate multi-term metadata, auto-classify documents based on concepts, topic, or subject, route to one or more taxonomies, and easily identify and manage content of both high value and no value. Using the conceptClassifier platform, during the migration process a metadata repository is developed, to be used after migration to improve any application that requires the use of metadata. Concept Searching recommends using the content optimization process before, or in conjunction with, the migration.

Intelligent Migration Case Study
This organization was implementing SharePoint, requiring a migration. Instead of a forklift approach, which simply transfers risk from one system to another, the company’s objective, and challenge, was to expedite the migration, minimize cost overruns and scope creep, and improve the quality of content in SharePoint after the migration.

Using the conceptClassifier platform, conceptClassifier for SharePoint, and conceptTaxonomyWorkflow, an intelligent approach to migration was taken. As content was migrated, it was analyzed for organizationally defined descriptors and vocabularies, which automatically classified the content to one or more taxonomies, and optionally to the SharePoint Term Store. conceptTaxonomyWorkflow then routed the content to the appropriate repository for review and disposition.

This approach included indexing content across repositories – file shares to file shares, file shares to SharePoint, and custom actions from any other repository via .NET code and web services. The migration process also had to address the security of documents as they were moved to their new locations. There were two imperatives here – not only to respect the existing security status and apply the same security in the new location, but also to identify sensitive documents that may not currently be in a secure location. Assessing the security needs of these documents requires intelligent interrogation of their content, and then comparison with a number of relevant official taxonomies, such as PII, PHI, and ITAR. If a document is automatically classified against one or more of these taxonomies, it must be given the appropriate security profile.

General migration tools cannot safeguard document confidentiality, because they do not make intelligent taxonomy workflow decisions based on the text content of the document. If this security profiling is not performed during the migration to SharePoint, then many of these documents may be retrieved using SharePoint search, breaching the relevant document security obligations. Using conceptTaxonomyWorkflow, these documents are safely routed to any designated secure location, with the correct access rights, protecting and preserving documents during the migration process.
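The idea of routing documents based on what their text actually contains can be sketched in a few lines. The taxonomy patterns, repository names, and functions below are hypothetical illustrations only – real classification engines such as conceptClassifier use concept-based techniques far beyond keyword matching – but the sketch shows the routing decision the text describes: classify first, then choose the destination.

```python
import re

# Hypothetical, highly simplified taxonomies: each label maps to patterns
# suggesting sensitive content. Illustrative only, not a real classifier.
TAXONOMIES = {
    "PII": [r"\b\d{3}-\d{2}-\d{4}\b",           # US Social Security number
            r"\bdate of birth\b"],
    "PHI": [r"\bdiagnosis\b", r"\bmedical record number\b"],
}

SECURE_REPOSITORY = "secure/restricted"         # assumed destination names
DEFAULT_REPOSITORY = "general/migrated"

def classify(text: str) -> list[str]:
    """Return the taxonomy labels whose patterns match the document text."""
    return [label for label, patterns in TAXONOMIES.items()
            if any(re.search(p, text, re.IGNORECASE) for p in patterns)]

def route(text: str) -> str:
    """Send a document to a secure location if it matches any sensitive taxonomy."""
    return SECURE_REPOSITORY if classify(text) else DEFAULT_REPOSITORY
```

A migration workflow would apply `route` to every document in flight, so sensitive content never lands in a searchable, unsecured location.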

The solution enabled the company to achieve its objectives. The Concept Searching tools facilitated the migration, making it easy for both subject-matter experts and the IT team, removing the need for specialized training and its associated costs. The organization was able to clean up its corpus of content, resulting in significantly improved search after the migration, and the opportunity to leverage the metadata to improve a variety of applications.

Enterprise Search
Many organizations are aware they have a problem with information retrieval, and some are investing time and resources to improve it. In the SharePoint environment, there are tools to tune the search engine – involving resources, testing, reiterative tasks, and ultimately increased costs – but the fundamental problem remains: business users typically spend two and a half hours each day searching. Erroneous and often meaningless metadata populating the search engine index will always deliver irrelevant results. Without a third-party tool, SharePoint does not currently have the ability to automatically tag content, or auto-classify it to the SharePoint Term Store or a taxonomy product.

Compound term processing can be deployed to improve search and the end user experience by automatically offering a ‘search as you type’ feature, guiding end users with relevant extracted concepts. Automatic query expansion and the ability to retrieve relevant documents, even if they do not contain the actual words used in the query text, significantly improves search precision and end user experience. Compound term processing can be used for predictive coding applications, based on its ability to extract the key concepts found in one or more documents and then to use this list of concepts to deliver a ranking of all available documents, based on their relevance to the training set. Finally, compound term processing can also be deployed to drive ‘push features’ within SharePoint, such as the display of relevant people, news, and recent documents.

The ability to capture concept-based metadata, and retrieve relevant search results from within an organization and its diverse repositories, is the real currency of interoperability. Providing syntactic as well as automatically generated metadata enables the meaning of content to be represented and shared in an unambiguous and transparent manner.

Enterprise Search Case Study
This medical research university was seeking to improve information retrieval on its intranet, to enable end users to find relevant information more quickly. End users did not necessarily know the correct medical terms to use in order to find material, and were either retrieving too much information or incorrect information. Consequently, they were not only unproductive but were making business decisions using erroneous information or missing data.

conceptClassifier for SharePoint enabled the university to auto-classify content in diverse repositories, both internal and external, to generate semantic metadata, and to normalize complex medical terms to make it easier for site visitors to find what they are searching for. Also, all automatically generated metadata could be managed through the SharePoint administration interface, significantly reducing the complexity of the solution. conceptClassifier for SharePoint is fully integrated with content types and can automatically change or assign a content type based on the meaning found within a document.

Additional findability features of the solution included the ability to automatically suggest the appropriate terms for end users, eliminating the need for complex medical terms and associated syntax to be entered. The result is the ability to deliver relevant and precise information.

Staff at the university, regardless of their role, can now find the information they are seeking on the intranet, improving information retrieval and maximizing the investment in SharePoint. The solution improves productivity and delivers the right information to the right person at the right time. Within a healthcare setting, accurate and relevant information retrieval cannot be on a wish list, it must be a requirement.


Healthcare organizations can no longer afford to ignore the mounting issues relating to information transparency – the breaches, compliance demands, and security risks associated with their data. Information governance, General Data Protection Regulation (GDPR), and privacy are now key corporate challenges. Yet, for most organizations, these cannot be addressed until an enterprise metadata framework is in place, to leverage the inherent value contained in unstructured content, making it useful to a variety of stakeholders and to the enterprise as a whole, for diverse purposes.

Healthcare organizations can increase operational value, if they can overcome their apparent inability to proactively safeguard confidential information and prevent data breaches. This will continue to increase in importance as patients’ medical data becomes more portable, and available to authorized stakeholders as well as patients. Today, the typical healthcare environment poses major security risks.

A holistic approach to cybersecurity must be adopted by healthcare organizations, one that considers their varied business activities. None are immune to data breaches or the exposure of confidential information. Healthcare must become more data driven, to enable intelligent decision making when it comes to cybersecurity. Maintaining the status quo is no longer a viable option. Information technology is changing at a rapid pace, as is medical technology. Healthcare organizations need to step back, re-evaluate, and resolve their security issues, before they become insurmountable. Ultimately, security is a vital component of any organization. Companies that fail to adopt a proactive security model will eventually suffer – jeopardizing stakeholder relationships, and negatively impacting the quality of healthcare.

Organizations that succeed will be those that recognize unstructured content can no longer be ignored. Concept Searching’s unique compound term processing capability, insight engine, and automated classification enable enterprises to address a wide range of challenges, improve business processes, and, ultimately, patient care.


The following information is a compilation of suggestions for healthcare organizations to consider or pursue. The author does not recommend any single option – many ideas are included to provide additional information to the reader.

Using Vendors and Cloud Service Providers
Healthcare organizations need to ask their third-party vendors the right questions. Did the software developers adhere to the expectations set forth in the Healthcare Information and Management Systems Society (HIMSS) Electronic Health Record Association (EHRA) EHR Developer Code of Conduct? Are vendors willing to sign comprehensive business associate agreements and be audited for compliance with Health Insurance Portability and Accountability Act (HIPAA) privacy and security rules? Are they willing to ensure the security of medical devices used within facilities and for patient care?

Is your cloud services provider compliant with the Health Information Technology for Economic and Clinical Health (HITECH) Act? Microsoft and other vendors offer a business associate agreement (BAA). Microsoft states that, “By offering a BAA, Microsoft helps support your HIPAA compliance, but using Microsoft services does not on its own achieve it. Your organization is responsible for ensuring that you have an adequate compliance program and internal processes in place, and that your particular use of Microsoft services aligns with HIPAA and the HITECH Act.”

With the impact of General Data Protection Regulation (GDPR), organizations need to follow much stricter guidelines, even if they are US-based organizations. Find out where your data is stored. Is it backed up? How often? Simple questions, but sometimes overlooked. What cybersecurity guarantee is in place for the cloud, not the applications? Is it in writing? Organizations may also want to include clauses on remediation. In the Anthem breach, Anthem paid $17 million for two years of credit monitoring, as well as setting up a pool of an additional $15 million for out-of-pocket expenses for reimbursement. And that does not include the settlement.

These questions are only preliminary. Organizations must do a comprehensive evaluation of third-party vendors and cloud providers. The answers will show which vendors fit with an organization’s cybersecurity program, and minimize its exposure to breaches and the damage they cause.

Assure Data Security and Privacy
Healthcare organizations are responsible for protecting all sensitive and confidential information, such as protected health information (PHI) and electronic protected health information (ePHI), in accordance with HIPAA and HITECH compliance guidelines and the Red Flags Rule. Sensitive information is typically unique to an organization. It will consist of protected health information, but can also include scientific and technical data, financial information, possible mergers and acquisitions data, or clients’ non-public financial information. Each enterprise should carry out an inventory, identify all sensitive data, and put the necessary processes in place to protect it. Each priority item should be guarded, tracked, and encrypted, as if the organization’s survival depended on it. In some cases, it may.

According to a study by Neustar, only 6 percent of healthcare organizations had a disaster recovery plan for big data, hybrid cloud, and mobile. And 50 percent had no plans for any of these vulnerability exposures, with 62 percent considering these environments hard to protect. Healthcare organizations, including patient facilities, do maintain alternative procedures for continuous patient care, particularly in emergency environments. Assuming a plan is documented and tested, patient care facilities have been through the ‘what if’ scenarios. Although developing a disaster recovery plan is complex, the advantages far outweigh the effort of documenting it. There are two choices: an organization can wait until an event occurs and then write the plan, or it can take a proactive approach, investing the time and care required to address recovery before a disaster happens. Most probably, it will happen.

The HIPAA Security Toolkit Application, developed by the National Institute of Standards and Technology (NIST), is intended to help organizations better understand the requirements of the HIPAA Security Rule, implement those requirements, and assess the implementations in their operational environment.

Key action items include:

  • Conduct or review a security risk analysis, per 45 Code of Federal Regulations 164.308(a)(1), of the certified electronic health record (EHR) technology, implement security updates, and correct identified security deficiencies as part of the risk management process
  • Identify the ePHI within the organization – this includes ePHI that an organization creates, receives, maintains, or transmits
  • Identify the external sources of ePHI – for example, do vendors or consultants create, receive, maintain, or transmit ePHI?
  • Review HIPAA security gap assessments and address key issues – develop an implementation approach and plan, and a common operating framework or a measurement framework
  • Enforce IT and operational policies vigorously, up to and including termination of employment

Organizations must also evaluate their use of the cloud – an intertwined yet separate issue when outsourcing medical records. Questions may include details on the cloud facility – is it a private cloud, Amazon Web Services (AWS), Microsoft, or another provider? Where is data located? Organizations will need to follow each country’s laws on the storage of personal information on local servers.

A Dell EMC study suggests data loss and downtime costs a total of $1.7 trillion each year. What can you expect? On average, businesses experience 25 hours of unexpected downtime, 36 percent incur revenue loss, and 34 percent have delays in product development. The value of stolen information on the black market is estimated at $120 billion – well worth cybercriminals’ time and effort. The average cost of a data breach is $4 million, and an additional $3.3 million for brand damage.

Reducing Ransomware Attacks
No organization in any industry can be too vigilant – ransomware is on the rise and the bad actors are getting greedier, having tested the waters and found that over half of organizations attacked by ransomware will pay. The problem, of course, is whether your organization can trust the actors to release the data. In the case of Nayana, as mentioned earlier, it was required to make three payments and was not able to decrypt data until after the second payment. Logically, Nayana could have paid the US $1 million in bitcoin and never been allowed to retrieve the data. It is naïve to think that the threat of ransomware will not increase, becoming more sophisticated in attacks, demands, and costs.

Here is an initial list of actions you can take to reduce the risk of ransomware:

  • Regularly patch your systems for newly found vulnerabilities
  • Run penetration tests to discover holes in your security perimeter
  • Conduct a cyber health check to assess your company’s cyber risk exposure
  • Train your staff to recognize phishing emails, which are often used as means of delivering ransomware
  • Adopt an ISO 27001-compliant information security management system (ISMS), to improve your company’s security posture

According to IBM research, healthcare employees are the least educated on security procedures and processes, even at the fundamental level of not knowing the basics of creating unique and individual passwords. It is no wonder that the healthcare industry has such a dismal track record in cybersecurity.

Human error, including falling for phishing attacks, is the leading cause of major security breaches today. Healthcare systems should regularly remind people of the importance of information security best practices, through mandatory training, strategic reminders, and other means.

Ultimately, it is the organization’s responsibility to enforce security fundamentals, train employees on security procedures and processes, and provide mandatory ongoing education. A simple Google search retrieves multiple vendors that provide security education, with both classroom-based and standalone training.

Update HIPAA
The HIPAA Security Rule and the HIPAA Privacy Rule are already well-known frameworks for defining how a healthcare organization should secure people, systems, data, and equipment, much as the Payment Card Industry Data Security Standard (PCI DSS) is employed for debit and credit card security. These established methods of approaching healthcare security would merely need to be updated to cover new forms of cyberattacks and new tactics employed by cybercriminals.

Purchase Insurance
According to Christopher P. Skroupa, in an article posted in Forbes CIO, “Cyber insurance is in its infancy, and as such, organizations need to understand how much cyber insurance they need. Target had $100 million of cyber insurance and has over $450 million of loss today, which is estimated to total at $1 billion by the end of 2017. This isn’t a little off — it’s way off. Cyber insurance is a tool to transfer risk that needs to be correlated to cyber risk.”

Many financial services organizations have cyber insurance, and healthcare organizations should evaluate the pros and cons. Since this is a relatively nascent kind of insurance, most leaders of healthcare organizations and boards of directors may not be aware that it exists. Significant open questions about it remain, including who should pay for such policies and whether it should protect the institution, the patient, or both. At the moment, the institutions themselves are paying, and it is likely this will not change in the foreseeable future.

Protect Supply Chains
Hospitals and healthcare organizations have diverse supply chains, and massive lists of vendors with whom they digitally interface. Care providers must understand the many moving parts that are involved and protect their relationships and information exchanges with and among those groups. Any contracts that specifically state who is responsible when a breach occurs should be reviewed. This may have been done, if an organization has become GDPR compliant. The average number of partners for an Office 365 client is 74. Not only should the legal aspects be reviewed, end users should also be trained on sharing information that may contain privacy or sensitive information. For example, the Target breach was caused by a heating, ventilation, and air conditioning (HVAC) supply chain vendor.

Share Industry Best Practices on Cybersecurity 
The Financial Services Information Sharing and Analysis Center (FS-ISAC) has made life easier and safer for the financial services sector, by enabling peer financial institutions to share information rapidly and directly. Similar groups, such as the National Health Information Sharing and Analysis Center (NH-ISAC), can serve as starting points for expanding similar types of discussions and planning.

Deploy Strong Authentication
Healthcare systems should use multifactor authentication, or other types of consumer security that are already ubiquitous in the US financial services arena.
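The most common second factor in multifactor authentication is a time-based one-time password (TOTP), the rotating six-digit code produced by authenticator apps. As a minimal sketch, the standard RFC 6238 algorithm can be implemented with the Python standard library alone:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1).

    The code changes every `step` seconds, so a stolen password alone
    is not enough to authenticate -- the attacker also needs the secret.
    """
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the user’s device share the secret and compute the same code independently; matching codes prove possession of the device without ever transmitting the secret.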

Adopt Tokenization
Tokenization, which involves substituting sensitive data with other unique but non-sensitive data, has been in vogue in the credit card world for the past few years. It is a suitable way to protect data in situations in which a consumer, such as a patient, is involved in some type of card-based transaction. This may involve using a flexible spending reimbursement card or paying a healthcare related bill online.
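The mechanics of tokenization are straightforward: the sensitive value is stored once in a protected vault, and every downstream system handles only a random surrogate. The class and naming below are a minimal illustrative sketch, not a production token service (which would add encryption at rest, access control, and auditing).

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: sensitive values stay in the vault;
    callers hold only random tokens that carry no exploitable data."""

    def __init__(self):
        self._vault = {}                    # token -> original value

    def tokenize(self, value: str) -> str:
        """Issue a fresh random token for a sensitive value."""
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value -- restricted to authorized systems."""
        return self._vault[token]
```

Because each token is random, a breach of any system outside the vault exposes nothing: the tokens cannot be reversed into card numbers or patient identifiers.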

Copy the Chip Card Approach
US consumers first encountered chip cards in a significant way in early 2015, when card issuers began to widely distribute them. Much of this was done in the run-up to a shift in the definition of who was liable for fraud. Traditional magnetic stripe cards require a signature for security purposes. A chip card adds an additional layer of sophisticated fraud protection, through an embedded microchip that, when used at a chip-enabled terminal, turns cardmember information into a unique code that is difficult to duplicate or copy.

Experiment with Blockchain
Blockchain technology can record transactions between two parties, efficiently and in a verifiable and permanent way. It is being used in financial services as well as other areas. For example, after Estonia suffered a significant cyberbreach in 2007, the country became more aggressive about protecting its society, and is now using blockchain to protect its citizens’ medical data. A number of blockchain-based identity credentialing systems exist, including Guardtime, TruCred, Civic, and Onename.
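The property that makes blockchain suitable for protecting medical data – verifiable, tamper-evident records – rests on hash chaining: each block commits to its predecessor’s hash, so altering any historical record breaks every subsequent link. The following is a toy, single-party sketch of that property only, not a distributed ledger; all names are illustrative.

```python
import hashlib
import json

def add_block(chain: list, record: dict) -> list:
    """Append a record to a hash chain; each block commits to its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering with past records is detected."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": prev},
                          sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True
```

Applied to medical data, each block might record an access or update event, giving patients and auditors a trail that cannot be silently rewritten.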

Consider Biometric-based Security
Biometrics are increasingly being embraced as the ultimate in identifiers. Startups, such as Simprints and RightPatient, are testing their value as verification features for electronic medical records. Biometric security devices measure unique characteristics of a person, such as voice pattern, the iris or retina pattern of the eye, or fingerprint patterns. With biometrics, it can be extremely difficult for someone to break into a system.


Concept Searching