Depending on who you ask, you’ll get differing definitions of data leakage. On the surface, many would assume that data leakage has to do with the loss of sensitive data, and in one sense, it does. However, if you ask a computer scientist interested in Machine Learning and analytics, you will likely get a different take on data leakage.
What type of data leakage should your organization be concerned with? Of course, your business should avoid data leakage in all forms. However, how can your organization craft data leakage prevention policies if you don’t know how data leakage occurs?
This post will explain the two contexts of data leakage so your organization can work to minimize data leakage in all of its forms.
The Two Types of Data Leakage
Data leakage occurs in two contexts: security and predictive algorithms. In one context, data leakage refers to sensitive data getting out; in the other, it refers to data getting in. Data leakage in any context is bad, and your organization should create data leakage prevention policies to protect itself. Let’s take a closer look at data leakage in each context to clarify the topic.
Data Leakage: A Security Concern
In the context of organizational security, data leakage is the unauthorized transfer of data from within the organization to an outside source. While the news is full of stories of high-profile digital security breaches, data leakage can also refer to physical data.
Typical data leakage threats occur online and in email, but they can occur anywhere data is accessible: mobile phones, laptops, USB drives, and more. As a result, data leakage is one of the primary concerns data security specialists deal with. The damage done by a data leak can be immense for organizations, from lawsuits and financial penalties to significant reputational damage and loss of consumer trust.
Organizations have to understand that data leakage threats are not only external. There are also internal threats, both accidental and malicious. Accidental data leaks most commonly occur when employees send emails with confidential information to the wrong recipients.
While accidental data leaks are the most common type of internal leak, unfortunately, accidents don’t mitigate legal responsibility, so your business could still face penalties and reputational damage.
Malicious internal data leaks are typically carried out by disgruntled employees. They could include emailing or electronically transferring data to outside parties, but they could also involve taking photos of sensitive information, making copies, or loading up a USB drive with data and carrying it out of the building.
In addition to internal threats, there are external threats, such as phishing attacks, malware, etc., that cybercriminals use to gain access to your organization’s sensitive data. The digital threat landscape is constantly evolving, so your organization must actively protect its digital assets and data.
In addition to robust internal and external data security measures, your organization must also invest in employee training to help educate them on common attacks and ways to prevent accidental data leakage.
Data Leakage: A Machine Learning Problem
In the context of Machine Learning, data leakage occurs when information from outside the training dataset is used to create the learning model. The issue with data leakage in Machine Learning is that it can invalidate the model and lead to unpredictable results.
The most common type of Machine Learning data leakage occurs when test data is included in the training data for the learning model. Why is this a problem? When test data leaks into training data, your organization gets a Machine Learning model whose measured performance is unrealistically high.
You’ll get incredibly accurate, high-performing results because the model has already seen the testing data in some capacity during training. However, the purpose of testing data is to simulate real-world conditions that would be completely unknown to the model.
In test scenarios, this flawed model will look great, but when it comes time to deploy it in analytic settings, it will produce flawed results or even be utterly useless to your organization. In many ways, it is better to realize the model is entirely useless than to act on flawed analysis.
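One common way test data sneaks into training is through preprocessing. The sketch below (plain Python with made-up numbers, purely illustrative) contrasts a leaky approach, where a normalization statistic is computed over the entire dataset, with the correct approach of computing it from the training rows alone:

```python
import random

# Toy dataset: ten numeric feature values (illustrative, not real data).
data = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0, 7.0, 11.0, 19.0, 30.0]

random.seed(0)
random.shuffle(data)
train, test = data[:7], data[7:]  # 70/30 split with no shared rows

# LEAKY: the centering statistic is computed over ALL rows, so
# information about the held-out test rows leaks into training.
leaky_mean = sum(data) / len(data)
leaky_train = [x - leaky_mean for x in train]

# CORRECT: compute the statistic from the training rows only, then
# apply that same transform to the held-out test rows.
train_mean = sum(train) / len(train)
clean_train = [x - train_mean for x in train]
clean_test = [x - train_mean for x in test]
```

The fix looks trivial here, but the same mistake hides easily inside longer preprocessing pipelines, which is why it is so common in practice.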
The surest indicator that you have a data leakage problem is overperformance. For example, if you have a Machine Learning algorithm that is built to help you invest in the stock market and it performs with an incredible degree of accuracy in tests, data leakage has likely occurred. If it seems too good to be true, it likely is.
Combating this type of data leakage might seem like a simple task: just withhold testing data from training data. While that sounds straightforward on the surface, in reality it can be challenging when dealing with complex datasets.
There are some ways organizations can combat data leakage. One of the simplest and most effective methods is to create a validation dataset from your training data and withhold this data until you are ready to perform one final check of your algorithm’s performance.
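A minimal sketch of that idea, using plain Python and row indices standing in for a real dataset (the split percentages are illustrative assumptions, not a prescription):

```python
import random

# Illustrative record IDs standing in for rows of a dataset.
rows = list(range(100))

random.seed(42)
random.shuffle(rows)

# Carve a validation set out of the training portion and withhold it
# until the one final check of the model's performance.
test = rows[:20]          # 20% held out for testing
validation = rows[20:35]  # 15% withheld for the final check
train = rows[35:]         # 65% used to fit the model

# The three sets must never overlap; any shared row is leakage.
assert not (set(train) & set(test))
assert not (set(train) & set(validation))
assert not (set(test) & set(validation))
```

Because the validation rows never influence training or model selection, a model that suddenly underperforms on them is a strong hint that leakage crept in elsewhere.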
Another simple action organizations take to combat data leakage is to add “noise” to their training data. By adding random input data or “noise” to the training data, you can smooth out some of the effects of data leakage.
Organizations can also remove the variables they consider leaky. However, unless you have firm evidence that a variable is leaking, removing it could hamper the performance of your learning model just as data leakage does, only in the opposite direction.
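A typical leaky variable is one recorded only after the outcome you are predicting has already occurred, so it could never be known at prediction time. The sketch below drops such a column; the rows and the column name `days_active_after_cancel` are hypothetical:

```python
# Rows as dicts; "days_active_after_cancel" is a hypothetical feature
# that is only recorded after the churn outcome, so it leaks the target.
rows = [
    {"age": 34, "plan": "pro", "days_active_after_cancel": 3},
    {"age": 51, "plan": "free", "days_active_after_cancel": 0},
]

suspect = "days_active_after_cancel"
cleaned = [{k: v for k, v in r.items() if k != suspect} for r in rows]
```

Asking "would this value exist at the moment the prediction is made?" for each feature is a practical way to decide whether the evidence against a variable is firm enough to drop it.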
Unfortunately, not many resources are available that discuss data leakage from the perspective of Machine Learning models. To combat this form of data leakage, you must pay close attention to your test results and tightly control testing data.
The definition of data leakage depends on the context in which it is used. In one case, it refers to sensitive data getting out; in the other, it refers to outside data getting into your models. However, no matter the context, data leakage is not something your organization will want to allow.
If you want to learn more about the measures your organization can take to combat data leakage, reach out to an experienced app development partner.