MINDWORKS
Join Aptima CEO Daniel Serfaty as he speaks with scientists, technologists, engineers, other practitioners, and thought leaders to explore how AI, data science, and technology are changing how humans think, learn, and work in the Age of AI.
Mini: AI and the Privacy Paradox: When Sharing Becomes Prediction
As artificial intelligence grows more capable, it thrives on personal data, yet every piece of information we share deepens the privacy challenge. In this MINDWORKS Mini, Daniel Serfaty talks with Drs. Svitlana Volkova and Robert McCormack about how AI models collect, combine, and infer from our digital lives, even predicting future actions. They explore what this means for personal choice, data ownership, and trust, and offer practical advice for both AI developers and everyday users on staying vigilant while embracing AI’s potential.
Daniel Serfaty: One last question before I ask you for some closing thoughts. You mentioned privacy, Bob. From your earlier remarks, we're living in a paradox: the more data we provide to those AI devices, whether it's a human digital twin or an agent, the better they're going to be at what they do. It's an interesting hypothesis. And the more intimate that data is, about our health, our DNA, as you say, the nature of our relationships with our loved ones, for example, the better those AI agents working around us, or living among us, are going to be. Is data privacy even going to be relevant in the future? Because I see a race between the two: data privacy regulation fighting against AI's thirst for more data in order to get better. I know this is a question I should ask a lawyer, but I'm asking the two of you, much more intelligent people. So what do you think about that conflict?
Robert (Bob) McCormack: I think our concept of data privacy will continue to evolve, and we need to really understand what the impact is going to be. There's a lot of confusion now about what happens to our data when we put it out there. One consequence of social media is that some people have been very open about sharing every detail of their private lives, maybe too many details. The threat hasn't always been outright theft of data; without realizing it, we've been handing over our data for years, if not decades. And I don't think we fully understand who controls that data, who owns it, or who's making money off it.
Any app you use on your phone or computer that's free is probably taking your data and using it to build the next app, or to do something, whether nefarious or good, without your knowledge. So there needs to be a continued conversation about how we view data privacy and what's important to us as humans and as a society, so we can shape that future in a way that benefits us all.
Svitlana Volkova: I envision more regulation in this space, especially because of foundation models and generative AI. Over the last two years, people have become more educated and more aware that their data is out there and that these models are trained on it. So every person is making that decision: what to share, how much to share, whether to share at all. I know lots of people who share nothing, and I am not one of the people who share much. But if you ask Claude, "Who is Svitlana Volkova?", it can generate things about me almost accurately, because there are some things about my professional life on the internet. At the same time, it hallucinates, right? It can miss my university back in Ukraine, and it can miss some critical facts from my life. But overall, it can generate a pretty good bio for Svitlana Volkova. So we just have to be aware that the data is out there and we can't do much about it. And we have to decide what to share next, and whether to share at all.
Daniel Serfaty: That's a good example. I think there is a generational gap regarding the importance of privacy with folks who are maybe your children, my children, folks just coming out of college right now, even though they're aware that their data is being used and commercialized by third parties. It will be interesting in the future, and maybe I'll invite you toward the end of this podcast season to reflect on this. You talk about mistakes or hallucinations of AI. But what if AI were making inferences about your behavior, Svitlana? Not your resume, not what you've done in the past, but inferences about what you're going to do, and then publicizing that and making it available to Bob, let's say. Or to any other person whose intentions are less clean than Bob's. I think that's really the thing, because those ingredients of the digital twin, the ingredients of modeling a person, are not just about modeling what that person has done. They're also about modeling what that person is thinking, what that person is about to do, and how that person will behave in the future.
That can be disruptive on many levels that we haven't even started to think about. Accessing the less-than-conscious part of ourselves, based on the observables of our everyday behavior, is also something that is intriguing and frightening to many of us. I'd like to conclude by giving you the platform for some advice for two kinds of people. First, think about the more junior people entering the fray, the workers of AI in a sense, the developers of AI: what advice would you give them as they enter the field of AI right now? And perhaps some other advice you may have for the casual user of AI. Give me a couple of minutes each, with some guidance and some advice. Who wants to start?
Svitlana Volkova: I can go first. So let's start with the developers. I would say, focus on problems, not techniques. Fundamentally understanding why I'm developing the AI system, why AI is necessary here, and what components of compound AI are needed, that is the key, right? Just focus on the problem. My second big piece of advice is to understand evaluation. AI evaluation is fundamentally broken right now. We're basically relying on public leaderboards with proxy tasks, and we talk about reasoning and other abilities of AI systems in ways that don't let us translate capabilities into real-world applications, or make sure we're developing for the end user, to solve the end user's tasks. So benchmarking is not going to get us far. How do we transform AI evaluation? Finally, I would say, think more deeply about human-AI symbiosis, like you said, Daniel. What are the human-AI workflows that can make the human shine, the AI shine, and the team shine together? So those are my three pieces of advice for developers.
As for the casual user, I really want to warn them: approach AI as a collaborator, not an oracle. AI makes mistakes. Be aware. Don't trust it blindly. And this is, again, advice I give myself when I plan my everyday tasks: what to delegate to AI versus what to do yourself. So maintain that division between what you let AI do and what you reserve for yourself, and be an active participant in shaping how AI is used at work and at home, with your kids versus in your workspace.
Daniel Serfaty: Wonderful. Bob, do you want to add something to these very wise words from Svitlana?
Robert (Bob) McCormack: I don't know if I can say it better, but I totally agree with what Svitlana said. And maybe I'll add, my advice for developers is: don't be afraid to use AI tools, and think about the consequences AI is going to have on the applications you're developing. Think about the data you're using in your AI and how people are going to use it. And be thoughtful about when is the right time to use an AI model, and when other, maybe more traditional, models are better, more efficient, less costly.
For the everyday user, my guidance would be to find the right balance of trust and usefulness in AI and healthy skepticism towards it. As Svitlana said, hallucinations are real. Don't always trust it, but don't be afraid to use it as a tool in your life. Just keep in mind, it's not the oracle as Svitlana said, but it's a stepping stone to solving your problems and finding your answers and really think about where the information is coming from. Think about who owns that AI and what they have to gain by you using it. And just be very careful, but be optimistic about it as well.