Institutionalizing the Concept of Privacy: Global Convergence and Complexity in the Digital Age

What I'm going to do, I hope, is to describe to you, first, what privacy is. At the risk of telling you things you already know, I'm going to try to set a basic foundation. Second, I'm going to talk about how this concept of privacy, and a particular subcategory of privacy, has been institutionalized globally over the past 20 years, and at an accelerating rate within the last few years, as we see increased international attention to privacy and data governance. Third, I'll talk about some of the challenges, how we move forward, and some of the trends that I see, again, from a legal perspective.

Understanding Privacy

In thinking about privacy, there are two tensions that we face in developing the law. One is the tension between comprehensiveness (that is, having a single set of rules that can apply uniformly across the economy) and a sector-specific approach. After all, healthcare is in some ways different from banking. Online platforms are different from educational institutions. So there is this tension between comprehensiveness and the sector-specific rules that, I believe, are necessary.

The second tension is between clarity and flexibility. Businesses on the one hand, obviously, want certainty, they want clarity, but on the other hand, they don't want handcuffs. They don't want so much clarity that their flexibility and their ability to innovate are limited. Again, that's an unavoidable tension; clarity versus flexibility.

It also seems to me that there are three guiding objectives to any privacy policy. One, of course, is consumer protection, the protection of the individual. The second is the goal of facilitating innovation, particularly with all the potential that artificial intelligence has to offer. The third is to achieve global interoperability. Interoperability does not mean uniformity, but clearly, we are in an era of globalized economic activity. Therefore, in privacy law, we're trying to achieve global interoperability.

First of all, what is privacy? The word privacy is like an umbrella; it covers many concepts and many values. You think of personal dignity, the dignity of the individual. You think about the sanctity of the home and of family life, you think about the right to be left alone, the ability to withdraw from society. All the issues around who we are, how we define ourselves, our autonomy, our identity, the right to make choices about your life, intimacy, sexuality, reproduction, the family; these are issues that are sometimes referred to, at least in the United States legal system, under the umbrella of privacy.

Privacy is also, to some extent, about confidentiality, about secrecy, trade secrecy, but also personal secrecy. The confidentiality, for example, of our communications, whether it's paper mail going back to the 19th century, or whether it's a text or other digital communications. Privacy also has elements of anonymity; the right to be unknown in certain contexts. Then finally, there is this concept of control over information about yourself. That's really the one, of all of these concepts of privacy, that we're talking about today.

Now, control is a misleading word in many ways. As we'll talk about, at some level, the individual doesn't have control, but the individual does have an interest in how the information is utilized.

Also, of course, you can think of privacy between citizens and the government and privacy in the corporate sphere, privacy between consumers and corporations.

Jean Tirole has correctly noted that there is a relationship between these two, in that the information collected in the corporate context, in the consumer context, may be used by the government for social control. By and large, we're going to be focusing in this conference on the question of privacy between the consumer and the corporation.

What are the sources or drivers of privacy law? There's human rights law, there's consumer protection. There's the need to generate trust in e-commerce, which is critical. One of the major drivers of strong privacy laws around the world is that you need consumer trust to promote adoption and use of these services. There may be others as well. There's also the potential of privacy as a competitive differentiator. Certainly, Apple has made privacy part of its corporate persona and has tried to distinguish itself, of course, from Google and Facebook and other companies by saying that, "At Apple, we do not monetize your information. We do not sell your data. We sell good products, we don't sell advertising."

Information privacy, as I said, is not primarily about secrecy or confidentiality. It's focused, instead, on what is fair in the collection and use of information. Information privacy primarily deals with information that is, in fact, disclosed; information that is generated in the course of a consumer transaction or information that's generated in the course of interaction with the government, such as through the education system, the health system, or the payment of taxes. This is information which is necessarily disclosed.

You have to give up that information in order to use the service or to participate in society, but still, even though you have surrendered the information to a third party, either to the commercial entity or to the government, we still say you have a privacy interest in it. It comes down to this question of fairness in how this information is being used about you. This kind of information privacy is also known as data protection or consumer privacy.

In Europe, in fact, there's an explicit distinction between privacy and data protection, embedded in the European Charter of Fundamental Rights. Under the EU Charter, “privacy” refers to the confidentiality of communications and the sanctity of the home and family life, whereas what Europe refers to as data protection focuses on fairness. Data must be processed fairly, and embedded in that are concepts we will talk about more: consent, access, and the right to correct information about yourself.

Two more preliminary points. First of all, privacy is context dependent. Obviously, in the workplace you're subject to monitoring by your employer. When you enter the workplace, you know that your behavior is going to be monitored. In other contexts, you have higher expectations of privacy.

In the medical context, for example, you have to give your medical information to the doctor and the doctor gives it to the nurse and it's conveyed to the pharmacy when you go to get the prescription. Your health information is in many, many places. Somebody once said that from the time the doctor gives you the prescription to the time you put the pill in your mouth, 10 copies of that prescription are made and sit in different databases. It's been disclosed and yet, still, you think that your medical information is highly personal. You would not want it disclosed to your colleagues. You might not even want members of your family to know that you have a particular illness. This is the idea of context. What we will disclose to the doctor we clearly would not want publicly disclosed.

Second, as a preliminary point, privacy is not an absolute right. You see this in the GDPR. In fact, the title of the GDPR (the European General Data Protection Regulation) refers to both the protection of natural persons and the free movement of data. Right there in the title of the GDPR, you have the two competing interests, neither of which is absolute. Privacy defenders from the New Zealand Privacy Commissioner to the UN Office of the High Commissioner for Human Rights, and landmark court decisions from the Puttaswamy decision of the Supreme Court of India to the Roe v. Wade decision in the US, have all said that privacy is not an absolute right.

Fair Information Practices

We're going to be focusing on information privacy, based on this concept of fair information practices. What is fair in the collection, storage, use, and disclosure of information?

Let's give a brief history. Many of you are steeped in this, some of you may have been involved in developing it, enforcing it, et cetera. It began in the 1970s when the concern about computers arose and data banking became an issue. It was the dawn of the computer age. Simultaneously in Europe and the US, the modern concept of information privacy began to develop. You see it in a German privacy law that was adopted in the state of Hesse in 1970, in the Fair Credit Reporting Act in the US, in a report out of the UK, in another report out of the United States, and in further legislation in the United States and in Germany.

A major watershed occurred in 1980 when, almost simultaneously, the Council of Europe developed its convention on the protection of individuals with regard to the automated processing of personal data and the OECD (the Organization for Economic Cooperation and Development) adopted guidelines on the protection of privacy and transborder data flows. Again, even in the title of the OECD guidelines, you can see this tension between privacy and data flows.

The OECD, in 1980, adopted eight principles which have been dominant globally as the essence of information privacy now for 40 years.

•      Collection limitation: when an entity is collecting information, it should collect only as much as is necessary for the transaction.

•      Data quality: the data should be accurate; not perfectly accurate, but accurate enough for the purposes at hand.

•      Purpose specification: when an entity is collecting data, it must specify why it is collecting the data.

•      Use limitation: the entity should only use the data for the specified purpose.

•      Security safeguards: this is obviously a principle that has grown in importance. If you collect personal data, if you hold it, you are responsible for protecting it. You are responsible for protecting the confidentiality, integrity, and availability of that data.

•      Openness: you must be transparent.

•      Individual participation: this is where you sometimes get the concept of control, that the individual must have some ability to control. At the very least, individuals must have the ability, for example, to access the data held about them so that they can exercise their other rights, including the right to data quality and the right to know where the data is being disclosed and who else is getting it.

•      Accountability: finally, whatever set of rules you have, you must have a mechanism for enforcing them and making sure that you're living by them.

Sometimes different words are applied to these principles. Sometimes people talk about transparency instead of openness. Individual participation is sometimes talked about as control or choice. Sometimes people break these concepts out into nine principles. Sometimes they break them out into 10. Sometimes they boil them down to six.

But my basic point here is that these Fair Information Practices have been globally influential. The US Department of Homeland Security adopted them in 2008. They're reflected in the EU data protection directive of 1995. The Federal Trade Commission in the United States has more or less embraced them. They are reflected in the General Data Protection Regulation (GDPR) in Europe.

They began to be institutionalized in the 1990s, and for roughly a 25-year period between the 1990s and 2015, you saw them, particularly in Europe, taking hold through the Data Protection Directive. They were never comprehensively adopted in the United States. Instead, the US has taken the sectoral approach. If you look at our medical privacy law, our educational records privacy law, our communications privacy law, our financial records law, you see elements of these principles.

They've been adopted, to some extent, globally. Certainly through 2015, there was a movement in that direction. Israel, Argentina, New Zealand, and other countries have adopted laws based upon these concepts. The APEC (Asia-Pacific Economic Cooperation) privacy framework, which is a non-regulatory framework, is still based upon these same concepts. It reflects them in an effort to make them more business friendly, more flexible, less binding, less regulatory, but they are still the same concepts.

Simultaneously, there's been a lot of questioning about these principles and a lot of criticism of them. FTC Chairman Timothy Muris in 2001, talking about the financial services privacy law, complained about all the trees that had been killed to produce the bank privacy notices that are printed on paper, delivered by mail once a year, and read by nobody.

Professor Fred Cate has criticized the FIPs as being unsuccessful. One scholar wrote an article entitled "The GDPR: The Emperor's New Clothes," stating that the privacy promised by the GDPR is an illusion. Danny Weitzner, a former colleague of mine, when he was in the US Department of Commerce, said that nobody any longer defends notice and choice, which had become the sole focus of the privacy debate in the US.

There have been, over time, various efforts to say we need a new concept of privacy. We go through these cycles, academically at least, and to some extent with policy makers, where some commentators claim that we need a new definition of privacy for the era of big data, or a new definition of privacy for the Internet age, or a new definition of privacy for artificial intelligence. However, if you look at these supposedly new privacy concepts, they come back to the same concepts and the same words used in the FIPs.

For example, Professor Ira Rubinstein laid out a new concept that he called “personal data services.” At the end of the paper, he noted that this new approach was actually very similar to the core FIPs. Professor Paul Schwartz at Berkeley argued for the need to emphasize accountability and information quality, which are part of the FIPs. Different scholars may emphasize different principles, but they keep coming back to the FIPs again and again.

Since 2016, just in the last two and a half years, there's been a remarkable, dramatic series of developments globally, which to me represent a final phase of the institutionalization of the FIPs. The FIPs-based privacy framework is not a perfect system. It's a highly imperfect system, and yet around the world, this concept of information privacy is gaining acceptance. You see this particularly in the General Data Protection Regulation, which made this framework mandatory throughout the European Union and has had a huge effect in the United States. Indeed, the number one privacy law in the United States over the past three years has been the General Data Protection Regulation, as US companies doing business in Europe have all moved to comply with the GDPR, at least for data they collect in Europe.

In China, in the cybersecurity law that was adopted in 2016 and took effect in 2017, if you take a look at articles 41 and 43 for the regulated infrastructure operators, all of the FIPs are there. Purpose specification, collection limitation, security, and use limitation are all in articles 41 and 43 of the cybersecurity law. In India, in 2017, the Supreme Court issued a major decision recognizing privacy as a fundamental right under the Indian Constitution, making it clear that it is both a negative, protective right and a positive right, and saying it was incumbent upon the Government of India to adopt a comprehensive privacy law. Such a law is now in draft form and is attracting controversy. There's no doubt that India will move forward to adopt some kind of comprehensive privacy law.

In the United States, there has really been very little federal action by Congress on data privacy and even on data security since the 1990s, but we are now seeing increasing activism by the states, including adoption of the California Consumer Privacy Act.

Also in China, the government issued the Personal Information Security Specification in 2018, trying to spell out, in a non-binding way so far, guidance on what consent means. Obviously, many of you are more familiar with that. Brazil adopted its data protection law last year. Just last month, the EU declared that Japan had achieved adequacy, which means that Japan's data privacy legislation basically provides protection equivalent to that of the GDPR in the EU. A Korea adequacy determination may come soon. This is a remarkable, almost accelerating pace of global privacy change, all based upon the same concept of privacy.

Data Governance Mechanisms

Even as we are seeing global convergence on the broad outlines of what information privacy means, we see increasing complexity, creating more difficult challenges. Key aspects of data governance need to be strengthened.

One critical governance mechanism is enforcement. We have this concept of what is privacy. We use these words: consent, access, right to be forgotten, right to correct, and use limitation. What do they really mean? How do we reconcile that tension between clarity and flexibility? It comes down to the enforcement agencies. Any privacy protection structure needs enforcement.

In the U.S., up through 2016, the Federal Trade Commission had become increasingly active in privacy enforcement. With the Trump administration and its anti-regulatory philosophy, it is a little unclear where the Commission will go now, but before the change in administrations, the Commission was very strong in going after companies like Facebook. In one case, for example, it obtained a consent decree ordering Facebook, prior to sharing user information, to clearly and prominently disclose what data was being collected (that's the transparency principle from the FIPs) and to obtain express affirmative consent (that's the concept of user control or user participation).

Facebook is now under investigation for violating this order in the special agreements that it entered with some app developers and some services for sharing information without specific consumer consent.

Just a month and a half ago, the CNIL, the French data privacy regulator, imposed a major fine of €50 million. Some might say, "Well, for Google that's just five minutes' worth of revenue," but €50 million is still, as we say, real money. It shows the intent of the European regulators to say, "The GDPR is real and we are going to be enforcing it." This was a case where Google was found not to have been sufficiently clear in its privacy policy. It failed the transparency principle. It failed the openness or notice principle. The CNIL came down, in this case, relatively hard. Google will appeal; we'll see what happens. There are other cases pending.

The second governance challenge we face is making notice and consent meaningful. Notice and consent have been heavily criticized: we all click "I accept," we don't read the notices, they're too complicated, too many trees have been killed in the paper world, there are too many pages to scroll through online, and it would take you many years to read all the privacy policies associated with all the services you use. All of that is true, but I do think that notice and choice remain core elements of the privacy framework. Moreover, some good progress has been made in the delivery of notice and choice.

For example, when you visit a website that wants to collect your location, the major browsers now ask you if you want to share your location information. Right then and there, as the data is being collected, you are asked for your permission. That was a coding decision by the browser makers, who decided they would not allow the websites you visit to grab that location information from your operating system without your express permission. Previously, of course, the information could be pulled automatically without notice to the consumer. This was not a legal mandate, but a response to public pressure. The browser makers decided to increase user control, partly because they were competing for users and using privacy as a selling point.
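To make that mechanism concrete, here is a minimal sketch, in TypeScript, of how a site requests location through the standard browser Geolocation and Permissions APIs. The point is that the coordinates never reach the site's code until the user affirmatively grants permission in the browser's own prompt, at the moment of collection; the function name wrapping those two APIs is mine, for illustration only.

```typescript
// A minimal sketch of just-in-time location consent: the browser, not the
// website, controls the data, and it will not hand coordinates over until
// the user explicitly grants permission.

async function requestLocation(): Promise<GeolocationPosition | null> {
  // Check the current permission state first (supported in modern browsers).
  const status = await navigator.permissions.query({ name: "geolocation" });
  if (status.state === "denied") {
    console.log("User has already refused to share location.");
    return null;
  }

  // Calling getCurrentPosition() is what triggers the browser's consent prompt:
  // the site only receives coordinates after the user clicks "Allow".
  return new Promise((resolve) => {
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position), // user granted access
      () => resolve(null),             // user declined or an error occurred
      { enableHighAccuracy: false, timeout: 10_000 }
    );
  });
}
```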

Huge progress has also been made in the mobile world, at least in the United States, where the operating systems now require affirmative user action to disclose location information. Apps used to pull location and buddy lists or contact lists automatically from the operating systems. Apple, through its operating system, said, "Now we're not going to allow location or other information to go from the operating system to the third-party app developer without express permission of the device owner."

In other cases, notice and consent don't work all that well. In the past year, you've all seen cookie banners as you've visited at least western websites, advising you that the sites collect data from you. Some of these notices explain specifically why they're collecting cookie data, but others don't have enough detail and don't make clear that by continuing to use the website you're treated as giving consent. Regulators will have to address this and provide guidance, either through case-by-case enforcement, as the CNIL did with its fine against Google, or through guidelines.
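As a rough sketch of what a more meaningful banner looks like in code, the TypeScript below loads an analytics script only after the user explicitly accepts that specific purpose, and records the choice so it can be honored and demonstrated later. The element ID, storage keys, and script URL are hypothetical, invented for illustration rather than taken from any regulation or library.

```typescript
// A hypothetical sketch of explicit, purpose-specific cookie consent: nothing
// runs until the user affirmatively chooses, and the choice is stored per purpose.

type ConsentPurpose = "analytics" | "advertising";

function recordConsent(purpose: ConsentPurpose, granted: boolean): void {
  // Persist the decision so it can be honored (and shown to a regulator) later.
  localStorage.setItem(`consent:${purpose}`, JSON.stringify({ granted, at: Date.now() }));
}

function hasConsent(purpose: ConsentPurpose): boolean {
  const stored = localStorage.getItem(`consent:${purpose}`);
  return stored !== null && JSON.parse(stored).granted === true;
}

function loadAnalytics(): void {
  const script = document.createElement("script");
  script.src = "https://example.com/analytics.js"; // placeholder URL for illustration
  document.head.appendChild(script);
}

// The analytics script loads only after an explicit "Accept analytics" click,
// or on a later visit where that consent is already on record; simply
// continuing to browse is never treated as consent.
document.getElementById("accept-analytics")?.addEventListener("click", () => {
  recordConsent("analytics", true);
  loadAnalytics();
});

if (hasConsent("analytics")) {
  loadAnalytics();
}
```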

A third trend that we're seeing in governance is an expanded concept of what counts as protected data. Twenty years ago, there was a lot of debate about what protected information is, or what PII (personally identifiable information) is. A lot of the privacy laws had specific lists defining what was protected, and if it wasn't on the list it wasn't protected. We've moved away from that. Regulators and laws around the world, including the European GDPR and the California law, basically say that protected information is any information that relates to an individual, if it is capable of being associated with an individual, even by combining it with other information.

These broader definitions of protected information are generally subject to an exception for anonymized data. All, or almost all, of the privacy regimes allow the use of data that has been de-identified if the entity using it promises not to re-identify it. We can talk more about how that works in practice. There has been a lot of debate about how well data has been de-identified. There remains, however, a recognition that the analytic value you can get from anonymized or de-identified data does not harm the individual and does not violate privacy principles.
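As a rough illustration only, and not a statement of what any particular law requires, the TypeScript sketch below shows one common de-identification step: replacing the direct identifier with a salted one-way hash and dropping the fields the analysis does not need. Whether the result counts as anonymized under a given regime depends on re-identification risk, which is exactly the debate mentioned above; the record fields here are invented for the example.

```typescript
import { createHash, randomBytes } from "crypto";

// A toy record of the kind a privacy regime might treat as protected data.
interface CustomerRecord {
  name: string;
  email: string;
  city: string;
  purchaseAmount: number;
}

// A per-dataset secret salt so hashed identifiers cannot be looked up in a
// precomputed table; in practice it must be stored apart from the output data.
const salt = randomBytes(16).toString("hex");

function pseudonymize(record: CustomerRecord) {
  // Replace the direct identifier with a keyed one-way hash and keep only
  // the values needed for analysis.
  const token = createHash("sha256")
    .update(salt + record.email.toLowerCase())
    .digest("hex");
  return {
    customerToken: token,         // stable pseudonym, not reversible without the salt
    city: record.city,            // quasi-identifier: may still need generalization
    purchaseAmount: record.purchaseAmount,
  };
}
```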

We also have seen an increasing expansion of data governance mechanisms. In the past, a company would have its privacy policy in which it would set out all its rules and then the regulator would ask, "Did you live up to your privacy policy?" In the GDPR, we see a layering on of additional governance mechanisms. For example, under the GDPR, companies must now have a chief privacy officer or data protection officer.

This is a concept that originated in the United States. It requires somebody inside the business who is responsible for setting and overseeing internal implementation of the data processing rules.

The GDPR also requires privacy impact assessments. This is a concept that originated in Australia. A privacy impact assessment is a systematic process for looking at your data holdings, looking at your new products, your new services and in advance asking, as Jean Tirole said, "What could go wrong with this service? What data are we collecting? How are we going to be using it? How will we mitigate any privacy impact?"
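To give a feel for what such an assessment captures, here is a hypothetical, minimal shape for a privacy impact assessment entry, written as a TypeScript type. The field names are illustrative rather than drawn from any official template, but they encode the questions a PIA asks before a product launches.

```typescript
// A hypothetical, minimal structure for recording a privacy impact assessment.
interface PrivacyImpactAssessment {
  productOrFeature: string;
  dataCollected: string[];          // what data are we collecting?
  purpose: string;                  // why are we collecting it?
  retentionPeriodDays: number;      // how long will we keep it?
  sharedWithThirdParties: boolean;  // will we disclose it to others?
  identifiedRisks: string[];        // what could go wrong?
  mitigations: string[];            // how will we reduce the privacy impact?
}

const exampleAssessment: PrivacyImpactAssessment = {
  productOrFeature: "In-app location-based recommendations",
  dataCollected: ["coarse location", "purchase history"],
  purpose: "Recommend nearby merchants",
  retentionPeriodDays: 90,
  sharedWithThirdParties: false,
  identifiedRisks: ["location trail could reveal home or workplace"],
  mitigations: ["store coarse location only", "aggregate after 90 days"],
};
```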

Another important governance mechanism, at least in the United States, is civil lawsuits by consumers. Class actions are now a feature of the GDPR as well, not yet fully implemented but coming in Europe. The biggest regulator in the world cannot possibly have the resources, the scope, and the capability to monitor everything. Therefore, effective governance should include the crowdsourced layer of consumer class actions.

The concept of privacy by design, which first took hold in Canada, and which is related to privacy impact assessments, is another governance mechanism reflected in the GDPR.

Also, we are expanding the conceptual toolbox. Maybe privacy law isn't the only way to address the concerns that we have around data, particularly in this era of the Internet of Things. Maybe we need to bring competition law concepts to bear. We saw this, in fact, just last month in a major development. The German Competition Authority issued an order against Facebook's data practices, saying that the company could not combine the data from WhatsApp and Instagram, the two apps that Facebook had acquired, with other Facebook data without the prior consent of users. The German competition authority concluded that it was anti-competitive for Facebook to exploit its dominant position by combining the data in this way. We also see in the United States, and I predict elsewhere, civil rights laws being used to address data usage.

Artificial Fairness?

In the last two minutes or so, let me talk about AI and specifically about how we can take this concept of fairness and apply it to artificial intelligence. In the United States, we have a problem with racial disparities in our criminal justice system. We know that black defendants receive longer sentences, on average, than white defendants for committing the same crime. We've tried to eliminate this. The judiciary is committed to eliminating this, but it's not easy because of the implicit biases that we carry with us.

The judiciary in the United States has turned to AI to take the human out of the picture and just look at the data. Surely, the theory goes, if we only look at the data, we can eliminate the disparity. Well, a group called ProPublica looked at some of these AI systems and found that they actually replicate the human bias. Race is not one of the data points, and yet black defendants are still getting longer sentences than white defendants for the same crimes. Why is this?

Some of the AI developers argue, "Well, we don't know. This is machine learning, we give it the data and it processes the data. We cannot go back and figure out why and we cannot explain why it came up with the decision." How can that be fair? How is that transparent? We are now developing a series of concepts that will apply the fairness principles to AI. The GDPR begins to touch upon this by saying that a data controller must be able to explain or at least provide meaningful information about the logic involved in the processing of data. Researchers at several companies are developing what they call explainable AI. IBM researchers have developed something that they call the Supplier's Declaration of Conformity. This is intended to at least say what data was used in the training. Others have advanced the idea of bias bounties.
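To show the kind of "meaningful information about the logic involved" that an explainable model can produce, here is a toy TypeScript sketch: a simple linear risk score that reports, alongside the score, how much each input moved it. The feature names and weights are invented for illustration; this is not any vendor's actual method, and real systems are far more complex, but the principle of reporting per-input contributions is the same.

```typescript
// A toy, self-contained sketch of per-feature explanation for a linear risk score.
// Feature names and weights are invented; the point is to report not just the
// score but how much each input contributed to it.

type Features = Record<string, number>;

const weights: Features = {
  priorConvictions: 0.8,
  ageAtFirstOffense: -0.05,
  employmentYears: -0.3,
};
const bias = 1.0;

function scoreWithExplanation(input: Features) {
  const contributions: Features = {};
  let score = bias;
  for (const [name, weight] of Object.entries(weights)) {
    const contribution = weight * (input[name] ?? 0);
    contributions[name] = contribution; // how much this feature moved the score
    score += contribution;
  }
  return { score, contributions };
}

const result = scoreWithExplanation({
  priorConvictions: 2,
  ageAtFirstOffense: 19,
  employmentYears: 1,
});
console.log(result.score.toFixed(2), result.contributions);
```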

For my final point, I want to return to the Fair Information Practices, using the eight principles defined by the OECD in 1980. This, in my opinion, is the best we have. If we're talking about the collection and use of information, these are the relevant concepts. If you were at Ant Financial or another part of Alibaba, and your team were sent to the Jack Ma apartment in the gardens to get the creative juices flowing, what questions should the team ask about data as they sit around that table brainstorming and coming up with a brilliant product? These are the questions, based on the FIPs, that any product developer should be asking:

What data are we collecting? Why are we collecting it? Are we collecting too much? What are we going to explain to the consumer? How are we going to disclose this to them? How long will we retain the data? Are we going to keep it forever or will we only keep it for a period of time to generate the value from it? Will we be disclosing it to third parties? Will we be protecting the security of it? How will we protect the security of it?

 

(The author is the executive director of the Berkeley Center for Law & Technology. He spoke at the Conference on Privacy and Data Governance organized by Luohan Academy on March 19-20, 2019, in Hangzhou.)
