What will cyber security look like in the years to come? What are the problems with the current approach, and can it keep pace with today’s threats? The reality is that securing the network must be augmented with a strategy to specifically protect and restrict the organization’s most valued asset: the data. The challenge we face is allowing access to the network and the data it transports while governing how that data is accessed. Data Loss Prevention (DLP) helps control the flow of data and avoid unwanted data exposure. It allows the system to ask, and act upon, “Is this a valid place, motion, or destination for the data?”
Traditionally, file access is governed through role-based access control via directory services, but then what? What if an authorized user is the malicious actor? The data won’t protect itself from those who have the authority to access it. Ultimately, our goal is to establish a baseline behavior, a digital fingerprint, for end devices so we can detect and alert when anything acts out of the norm. For example, automated triggers driven by machine learning can record, and even alert, that Michael from marketing historically accessed only a handful of files per day but, for reasons unknown, is now pulling down 50% more data than usual. This aspect of DLP is critical: there may be no security policy that limits how much data Michael can access on authorized file stores, yet this behavior could be an indicator of an insider threat before the breach ever happens.
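The baselining idea above can be sketched in a few lines. This is a simplified illustration, not a production detector: the function name, the three-standard-deviation threshold, and the daily megabyte counts are all assumptions made for the example; real user and entity behavior analytics use far richer features.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's access volume if it deviates more than `threshold`
    standard deviations from the user's historical baseline.
    `history` is a list of prior daily access volumes."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu  # flat baseline: any change is unusual
    return abs(today - mu) / sigma > threshold

# Michael's typical daily access hovers around 100 MB.
baseline = [100, 98, 103, 101, 99, 102, 100]  # MB per day
print(is_anomalous(baseline, 150))  # a 50% jump stands out: True
print(is_anomalous(baseline, 101))  # within normal variation: False
```

The point is not the statistics but the workflow: no policy forbids Michael from reading these files, yet the deviation from his own baseline is enough to raise an alert for a human to investigate.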
The Global Risks Report 2016 from the World Economic Forum states that a significant portion of cybercrime goes undetected, particularly when the attacker is after an organization’s proprietary secrets or highly confidential documents. This is mainly because illicit access to confidential data can be very hard to detect. In recent news, an engineer named Anthony Levandowski was accused of downloading more than 14,000 files from Google and providing them to his new employer. With each incident, security professionals should be asking pointed questions. Is this a case of stolen credentials, malicious behavior or malware, or plain ignorance of the sensitivity of the data being accessed? How do you determine whether data moving around the network is being accessed in accordance with the organization’s security policy and intended use, and not falling into the hands of vengeful employees or malevolent hackers? Ill intentions, or simply inadvertent handling? We have all seen the initial power of machine learning through IBM Watson’s performance on Jeopardy. But what if we could harness that same power to protect our data?
A DLP solution can be broken down into three pillars:
- Intelligent Classification of Data
- Dynamic Policy Enforcement
- Intuitive Monitoring
Intelligent Classification of Data – Prevention of data loss starts with classifying and marking sensitive data. Three things should be considered when classifying data: how sensitive it is, how we want it accessed, and when and how it is allowed to be exported. This provides a risk-based foundation on which to apply policy control and monitoring.
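As a minimal sketch of content-based classification, the snippet below assigns a sensitivity label by pattern matching. The labels, rules, and function are hypothetical examples; commercial DLP engines combine content inspection with context, metadata, and user-applied labels rather than regexes alone.

```python
import re

# Hypothetical rule set, ordered from most to least sensitive.
RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),        # SSN-like pattern
    ("confidential", re.compile(r"(?i)proprietary|trade secret")),
]

def classify(text, default="public"):
    """Return the first (most sensitive) label whose rule matches."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return default

print(classify("Customer SSN: 123-45-6789"))     # restricted
print(classify("This design is PROPRIETARY"))    # confidential
print(classify("Lunch menu for Friday"))         # public
```

Once every file carries a label like this, policy enforcement and monitoring have something concrete to act on.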
Dynamic Policy Enforcement – Network segmentation and access control methods, while still relevant, fall short and need to be augmented with more sophisticated tool sets. Scanning data stores and intelligently classifying the data provide the fundamental components for enforcing your organization’s data export policy.
While email is a major culprit of data exportation, we must also consider instant messaging, printing, screenshots, and zipping or encrypting files for transport, all of which further complicate the issue, and we must secure cloud-based services with the same diligence and discipline. Controls and automation must be put in place that can not only recognize but dynamically prevent confidential data and intellectual property from leaving the organization.
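Tying classification to export channels can be pictured as a simple policy lookup. The channel names and the policy table below are assumptions for illustration only; a real enforcement point would also weigh user, destination, and device context before allowing or blocking a transfer.

```python
# Hypothetical export policy: which channels may carry each label.
POLICY = {
    "public":       {"email", "chat", "print", "cloud"},
    "confidential": {"email"},   # say, internal email only
    "restricted":   set(),       # never leaves the organization
}

def export_allowed(label, channel):
    """Return True if policy permits exporting data with this
    sensitivity label over the given channel."""
    return channel in POLICY.get(label, set())

print(export_allowed("public", "cloud"))      # True
print(export_allowed("confidential", "chat")) # False
print(export_allowed("restricted", "email"))  # False
```

Unknown labels default to denied, which mirrors the fail-closed posture a DLP control should take when classification is missing.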
Intuitive Monitoring – Policy control in and of itself is a great start, but alone is not sufficient. Even the greatest controls and policies could be circumvented so the possibility of an insider threat must be considered. Our cyber security strategy is to strengthen the monitoring capabilities with the help of machine learning to reveal patterns in typical data access and movement, giving security professionals a heightened understanding of user behavior.
These three pillars combined help us address the overall challenge: scaling the data aspects of cyber security along with the growth of IT. As the complexity of the network expands, so does the strain on security teams operating with limited resources and talent. Implementing the right tools to automate tasks can reduce the burden on IT staff, shorten time to detection, and mitigate the insider threat. With machine learning we can not only detect a data breach, we can help prevent it.
We understand that it is easier to read about a prevention solution than it is to implement one, especially when embracing a new approach. At Red River we are currently working with our clients to develop holistic DLP solutions that are effective and easy to use. Our goal is to help our clients protect their most sensitive data, assist them in adopting scalable cyber security solutions, and proactively help our federal customers achieve compliance with Insider Threat Executive Order 13587, NIST SP 800-53, NIST SP 800-137, and NIST SP 800-171. With a well-developed strategy and a properly implemented DLP solution, you can have the visibility and controls in place to protect your organization and interrupt the attack continuum.
About the Author: Rick Friend is a design engineer with a focus on digital security solutions, software defined networking, and next generation cyber technologies. When not helping customers solve business problems with technology, he enjoys designing saltwater marine aquariums and writing and producing music.