Data pseudonymization is one method of data protection. It protects the rights of individual data subjects while giving organisations some flexibility to balance those privacy rights against legitimate business goals.
The reasoning behind the GDPR is to protect the inherent, explicit and implied rights of data subjects in Europe, whose individuals have a fundamental right to privacy. Pseudonymized data is hard to identify because the process replaces identifiable characteristics with artificial identifiers, rendering the original data unidentifiable without additional information.
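The replacement of identifiable characteristics with artificial identifiers can be sketched as follows. This is a minimal illustration, not a production design; the record fields and the `pseudonymize` helper are hypothetical.

```python
import secrets

# Hypothetical record containing directly identifying fields.
record = {"name": "Alice Example", "email": "alice@example.com", "purchase": "laptop"}

# Separate lookup table linking pseudonyms back to the real identifiers.
# Under GDPR this "additional information" must be held separately and
# protected by technical and organisational measures.
key_table = {}

def pseudonymize(record, fields):
    """Replace identifying fields with random artificial identifiers."""
    out = dict(record)
    for field in fields:
        token = "ID-" + secrets.token_hex(4)  # artificial identifier
        key_table[token] = out[field]         # keep the link separately
        out[field] = token
    return out

safe = pseudonymize(record, ["name", "email"])
# `safe` can now be shared or analyzed; only the holder of `key_table`
# can map the tokens back to the original person.
```

Note that the non-identifying fields (here, `purchase`) are left intact, which is what keeps the data useful for analysis.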
Why Pseudonymize Data?
This data protection approach is helpful in situations such as the following:
- Where third parties often require access to IT equipment (for repairs, for example), or where employees need to take equipment away from their place of work, creating a risk of loss or theft of data, or of “shoulder surfing” in public spaces.
- Where pseudonymization forms part of a risk-management strategy, protecting users or serving as an internal data policy that reduces the risk of a data breach when sharing data with third-party data processors or external data controllers.
- Where personnel regularly require access to personal data.
- Pseudonymization can also assist in the practice of data housekeeping.
Many high-profile organisations, such as Google, Apple, and Uber, already use pseudonymization regularly to analyze data without the worry of repercussions from regulators. In this respect, pseudonymization is a welcome approach: it offers enhanced protection against security breaches and a higher level of privacy for employees and users.
Is Pseudonymization Risk-Free?
Pseudonymization used alone still carries the potential risk of a breach, because the original identifying data still exists in some form and may be matched by linkage or inference, allowing identification of the data subject. Here it is vital to differentiate between pseudonymization and anonymization. Pseudonymization differs from anonymization in that it provides enhanced but still limited protection: the data can be re-identified using external information. Pseudonymization obscures the data so that it is indecipherable on its own, but the obfuscated data can be re-identified when linked with additional data, rather like a key that unlocks a door.
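The linkage risk described above can be shown with a small sketch. The datasets here are invented for illustration: a pseudonymized record set where names have been replaced but quasi-identifiers (postcode, birth year) remain, and a hypothetical public dataset sharing those same fields.

```python
# Pseudonymized records: the name is gone, but quasi-identifiers remain.
pseudonymized = [
    {"id": "ID-9f2c", "zip": "90210", "birth_year": 1985, "diagnosis": "flu"},
]

# Auxiliary public data (e.g. an electoral roll) with the same quasi-identifiers.
voter_roll = [
    {"name": "Alice Example", "zip": "90210", "birth_year": 1985},
]

# Linkage attack: join the two datasets on the quasi-identifiers
# to re-attach a name to the "de-identified" record.
reidentified = [
    (v["name"], p["diagnosis"])
    for p in pseudonymized
    for v in voter_roll
    if (p["zip"], p["birth_year"]) == (v["zip"], v["birth_year"])
]
```

Even though no direct identifier survives in the pseudonymized set, the join succeeds, which is why quasi-identifiers must be considered when assessing re-identification risk.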
Pseudonymized data must be kept entirely separate from the linking data that would attribute it to an individual’s identity, and robust organisational measures need to be in place to prevent any third party linking the two. Anonymization, by contrast, strips away all personal information, deleting the original data, and is a permanent method of protecting privacy. Organisations usually use this method to collect data for specific purposes, such as testing new systems, analyzing patterns in surveys, or any other instance where the data does not need to be tied to a particular individual. The information is collected as needed, then stripped back and the original data deleted, providing a snapshot of “what” is happening rather than “who” the data refers to.
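The contrast with pseudonymization can be made concrete: anonymization keeps no key table at all, so there is no way back to the individual. A minimal sketch, with a hypothetical survey record:

```python
def anonymize(record, identifying_fields):
    """Strip identifying fields entirely: no key table, no way back."""
    return {k: v for k, v in record.items() if k not in identifying_fields}

record = {"name": "Alice Example", "age_band": "30-39", "response": "yes"}
anon = anonymize(record, ["name"])
# `anon` retains only the "what" (age_band, response), never the "who".
```

Because no mapping is retained, the result falls outside the scope of personal data, provided no remaining fields can be linked back to an individual.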
Pseudonymized data is subject to tests assessing the risk of a breach. To pass the reasonableness test, it must withstand “motivated intruder” analysis: a close examination of the likelihood that stolen data could be re-identified. As already mentioned, robust measures must be in place to reduce the likelihood of pseudonymized data being re-identified.
However, even with robust safeguards against re-identification, pseudonymized data still falls within the remit of personal data and does not qualify for the exemption from the GDPR that applies to anonymized data.