The boundary of privacy protection and the evolution of social norms
“If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” As implied by this remark from former Google CEO Eric Schmidt, social visibility is a powerful incentive that regulates behavior, for better and for worse. When people know that others will learn of their actions, they contribute more to the public good and are less likely to cheat, pollute, waste, or engage in other antisocial behaviors.
Although social visibility is known to be a powerful incentive that improves behavior, the protection of personal data has become an increasingly prominent concern in the age of big data. Defining “privacy” as “how much citizens know about each other’s behaviors,” S. Nageeb Ali (Pennsylvania State University) and Roland Bénabou (Princeton University) studied the costs of social transparency arising from evolving social norms and the adaptation required of formal institutions. Put simply, their research asks how institutions need to respond to changing privacy concerns.
Their theoretical model factors in social norms, social learning, and preferences to show how both society and individuals evolve. Issues such as overt racism, sexism, and domestic violence went from “normal” to deeply scorned within a decade or two, while divorce, cohabitation, and homosexuality shifted from intensely stigmatized to broadly acceptable. These changing social norms show that people learn from one another, and they carry far-reaching implications for legislators and courts as well as for companies and communities.
The rewards or sanctions that follow from publicizing individual behavior stem from the reactions of family, peers, neighbors, and customers, a mechanism that scientists and economists call social image. While many might believe that privacy serves the overall social good, by reducing social transparency, privacy protection incurs costs: weaker incentives to follow social norms and less learning from each other’s changing values. Harmful behaviors, for example, are often kept secret under a veil of anonymity, so the behavior of individuals or groups is never exposed to society. The “Me Too” movement illustrates this well: predatory behavior that might otherwise have been swept under the rug was exposed, shifting overarching social norms and advancing women’s rights.
However, when behavior is judged through the lens of social image, the information that reaches society is imperfect, which distorts the way people behave and communicate with one another. People hide their changing values because of the stigma society places on them. Society’s perception of the prevailing norm is correspondingly misled by these distortions, and the distortion is amplified as individuals’ actions become more visible.
The optimal level of privacy protection therefore depends on whether social norms are in a transition period. When values and norms are in flux, people begin questioning their own judgments, and it is better to allow greater privacy so that people can adapt their behavior to their true values and feelings. With less privacy, people worry about social stigma and distort their actions; the underlying shifts in values remain obscured, resulting in the rigidification and maladaptation of both private conduct and public policy. On the other hand, when values are relatively stable and right and wrong are clearer, more publicity is preferable as a way to monitor and punish misconduct.
From the perspective of social norm evolution, as the research shows, there is always a boundary to privacy protection. A higher level of privacy protection is not always the better choice, particularly when social norms are stable.