Thomas Sargent is the winner of the 2011 Nobel Prize in Economics for his empirical research on cause and effect in the macroeconomy. He is one of the leaders of the "rational expectations revolution" and helped make the theory of rational expectations statistically operational.
Professor Sargent shared his understanding of recent developments in differential privacy. He emphasized the great potential of this field and reinterpreted its main issues from an economic perspective.
Okay. Thank you. And thanks, Long, for this invitation.
As I just mentioned, I'm trying to learn from this report and associated things. So here's something this helped me learn about. There's a literature among data scientists called differential privacy. It's both useful and beautiful. The game is to construct queries whose answers are essentially invariant to removing any one person from the dataset, and the general idea is to figure out how to do that. And one big tool there, where economists actually participated in the origin of this literature, is what's called randomized response. The idea is to collect data in a way that everybody in the process knows that noise has been added, in a way that ensures deniability for the person answering the question.
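The randomized-response idea Sargent describes can be sketched in a few lines. This is a minimal illustration of one standard coin-flip design (report the truth with probability 1/2, otherwise report a fair coin flip), not the specific mechanism in any paper he cites; the function names are invented for the example. The added noise gives each respondent deniability, yet the population rate can still be recovered.

```python
import random

def randomized_response(true_answer: bool, rng) -> bool:
    # With probability 1/2, report the truth; otherwise report a fair
    # coin flip. Any single "yes" is deniable: it may be pure noise.
    if rng.random() < 0.5:
        return true_answer
    return rng.random() < 0.5

def estimate_true_rate(reports) -> float:
    # Under this design, P(report "yes") = 0.5 * p + 0.25, where p is the
    # true rate of "yes" answers. Invert that to get an unbiased estimate.
    observed = sum(reports) / len(reports)
    return 2 * (observed - 0.25)

rng = random.Random(0)
true_answers = [rng.random() < 0.3 for _ in range(100_000)]  # 30% true "yes"
reports = [randomized_response(a, rng) for a in true_answers]
print(round(estimate_true_rate(reports), 2))  # close to 0.3
```

The point of the design is exactly the trade-off Sargent mentions: individual answers carry noise, but aggregate statistics survive.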
I'll put in a plug for a paper. There's a paper by an economist, Lars Ljungqvist, "A unified approach to measures of privacy in randomized response models: A utilitarian perspective". It was published in JASA about 30 years ago, and it's basically a simple mechanism design problem; that framing has been embraced. So that literature focuses on trade-offs. The whole game of this thing is to cope with these trade-offs and to calculate, to provide, upper bounds on certain kinds of risks.
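The "upper bounds on certain kinds of risks" can be made concrete. For the standard coin-flip randomized-response design (report the truth with probability 1/2, else a fair coin), the differential-privacy parameter epsilon is the log of the worst-case likelihood ratio between the two possible truths; this is a textbook calculation, sketched here rather than taken from the paper Sargent cites.

```python
import math

# For the coin-flip design: P(report "yes" | truth is "yes") = 0.75,
# while P(report "yes" | truth is "no") = 0.25. The privacy guarantee
# epsilon bounds how much any single report can shift an observer's beliefs.
p_report_yes_if_yes = 0.75
p_report_yes_if_no = 0.25
epsilon = math.log(p_report_yes_if_yes / p_report_yes_if_no)
print(epsilon)  # ln(3), roughly 1.099
```

A smaller epsilon means stronger deniability but noisier aggregate estimates, which is precisely the trade-off that literature studies.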
Another thing that I was drawn to, and learned about, and that people are using, is what is called secure multi-party computation. This connects to what previous speakers said about ownership. The idea here is joint ownership: shared governance and shared ownership. It uses cryptographic math, and you perform computations while the data stay encrypted. People at DeepMind are using this; there are Python programs built on PyTorch. The whole spirit is to use data, and to learn from data, that you don't have access to. There's a person named Andrew Trask at DeepMind who is a good teacher about this.
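One building block behind secure multi-party computation is additive secret sharing. This is a minimal pure-Python sketch of that primitive, not the PyTorch-based tooling Sargent mentions; the modulus and function names are chosen for illustration. Each party holds a meaningless-looking share, yet the parties can jointly compute a sum without anyone seeing the underlying values.

```python
import random

PRIME = 2**61 - 1  # field modulus; any large prime works for this sketch

def share(secret: int, n_parties: int, rng) -> list[int]:
    # Split a secret into n additive shares that sum to it mod PRIME.
    # Any n-1 shares together are uniformly random and reveal nothing.
    shares = [rng.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares) -> int:
    return sum(shares) % PRIME

rng = random.Random(42)
a_shares = share(25, 3, rng)
b_shares = share(17, 3, rng)
# Each of the 3 parties adds its two shares locally; nobody sees 25 or 17.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```

This captures the "learn from data you don't have access to" spirit: the computation happens on shares, and only the final aggregate is ever reconstructed.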
Steve, I'm done. I'm going to relax your constraint. I'm eager to hear Eric.
Well, thank you very much, Tom. You've always been so generous. Before I move on to Eric, I actually have a question for you, Tom. That article you referenced, by Ljungqvist I believe? Is it related to the computer science literature on differential privacy?
So a keyword in differential privacy is randomized response. That's an early idea, kind of, not the first; people have been thinking about this for a long time. It's a key tool and they're using it. Another remark: adding noise to data to ensure deniability is a beautiful idea. And it's the opposite of Sims' Rational Inattention, yet the math looks very similar. I'll just stop there.
Thank you. And in fact, one of Eric's colleagues at Harvard, who is a friend of mine, Cynthia Dwork, is one of the founding scholars of differential privacy in computer science. I was aware of that stuff, but I wasn't aware of Ljungqvist's work. So it's always exciting to see how literatures merge and coexist.
For more information, please visit Luohan Academy's youtube channel: Luohan Academy