MINDWORKS

Mini: Your Human Digital Twin: Coming Soon to a Device Near You

Daniel Serfaty Season 5

What happens when AI can mirror your thinking, preferences, and decisions? In this MINDWORKS Mini, Daniel Serfaty explores the coming era of human digital twins with Dr. Svitlana Volkova and Robert McCormack. Together they look ahead to AI that not only automates tasks but co-evolves with us—supporting discovery, anticipating choices, and poised to redefine how we work, live, and team with machines.

Daniel Serfaty: I'm going to ask you now about the future. Let's talk about the next year. When I say future, the future in AI is usually a few weeks, perhaps a few hours sometimes. But let's take a chance here and look at one year and maybe if we are really brave, look at five years when we look at the future. My first question is you can share with us some insight about the trends that you see regarding the future of AI technology this year, in five years perhaps? Give us a sense of where you see things moving. You mentioned a couple of things about compound AI and agentic AI earlier. You may want to expand on that and maybe multimedia AI. Let us know where we're going to go from a technology perspective and then we'll talk more about the uses of it later.

Svitlana Volkova: In the near future, I think we'll see more and more applications of agentic workflows in the enterprise. So basically automating document intelligence and all of the company-wide workflows with knowledge extraction, knowledge retrieval, synthesis, summarization. This is already happening, but we'll see it more and more embedded in our everyday lives. In terms of five years, I would place two bets. The first one: autonomous scientific discovery systems. I think the ability to create agents that can summarize scientific literature, analyze and synthesize data from genomic databases and other types of multimodal data, and generate insights really rapidly will be the key. So think about agentic workflows for AI for science. And the second one: human digital twins. I think right now, with LLMs that are pretty good pretenders of human intelligence, we are able to use them and manipulate their attributes, like personality, reliability, and competency, to model human digital twins. That will probably be another breakthrough in the next couple of years, making sure that we are using these human digital twins to automate research workflows, or information intelligence and analyst workflows, and so on.

Daniel Serfaty: Just a clarification before we give the microphone to Bob to give his own predictions or his own bets, as you said, what is a human digital twin?

Svitlana Volkova: Very good question. So first of all, the community is still converging on what we mean by the human digital twin. But I can give you the definition of the machine digital twin that is pretty widely used for plants, nuclear plants, and so on. It's a system that interacts with its real-world counterpart and is informed by real-world data. The digital twin then creates its own data, acts in the simulated environment, and further informs and improves the real-world counterpart. And there is this constant feedback loop. So think about a digital representation of the real world that is connected to the real world and makes the real world better. And it goes both ways. So that's the system digital twin. For the human digital twin, it's a much more difficult definition, first. And second, we still don't have a good representation of a human digital twin. It's very hard. LLMs cannot model human cognition.
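The feedback loop described here, where the twin is informed by real-world data, acts in simulation, and then feeds improvements back, can be sketched in a few lines of code. This is a minimal illustration, not any real digital twin framework: the scalar state, the class names, and the simple proportional adjustment are all invented for the example.

```python
# A minimal sketch of the digital twin feedback loop described above.
# All names and the toy "state" here are illustrative assumptions.

class RealSystem:
    """Stands in for the real-world counterpart (e.g., a plant's sensor feed)."""
    def __init__(self, state: float):
        self.state = state

    def read_sensors(self) -> float:
        return self.state

    def apply_adjustment(self, delta: float) -> None:
        # The real system is improved by what the twin learned in simulation.
        self.state += delta


class DigitalTwin:
    """Mirrors the real system, simulates, and recommends improvements."""
    def __init__(self):
        self.model_state = 0.0

    def sync(self, observed: float) -> None:
        # The twin is informed by real-world data.
        self.model_state = observed

    def simulate_and_recommend(self, target: float) -> float:
        # Act in the simulated environment: here, a trivial proportional
        # step toward a target operating point.
        return 0.5 * (target - self.model_state)


real = RealSystem(state=10.0)
twin = DigitalTwin()

# The constant feedback loop: observe -> simulate -> inform the real system.
for _ in range(5):
    twin.sync(real.read_sensors())
    real.apply_adjustment(twin.simulate_and_recommend(target=20.0))

print(real.state)
```

The point of the sketch is the loop structure itself: data flows from the real counterpart into the twin, and recommendations flow back, so each side keeps improving the other.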

Daniel Serfaty: So you can imagine that in a few years, if your predictions are in the right direction, we will have a digital twin chief of AI at Aptima that will be modeled after you, and with whom you will interact, one that will simulate what you're doing and continue to evolve, to co-evolve with you.

Svitlana Volkova: Exactly.

Daniel Serfaty: And therefore, help you do your work.

Svitlana Volkova: Exactly. Think about marketing and social media, where we already see fake personas for people that act on their behalf and post on social media. This is already there. I believe there are even companies that provide services to create your digital representation. It's more difficult to do it properly than that; like the chief of AI for Aptima you mentioned, that's way more difficult. But there are already really cool papers from Stanford in the biomedical domain that describe these visionary systems. Think about a research project that we have: we have the PI, the biologist, the data scientist, the human factors engineer. Think about all of these different competencies that we can model using LLMs and put together. But as Bob mentioned, LLMs are not going to be enough. It's a compound system, much more than just an LLM, which can only process text. I would refer to Bob's previous comment; I think it's all about putting it all together in a compound system.

Daniel Serfaty: Seem to be dreaming about the future from where I sit. So tell us a little more about how you see that short term and longer term future.

Robert (Bob) McCormack: So short term, I think we're going to see AI integrated a lot more into our everyday lives. Most people have smartphones in their pockets; most people can't live or operate in their personal and professional lives without them. I don't think there's going to be any one event, but I think we're going to see a lot of new features rolling out to our Apple phones and our Android phones that are going to make our lives a lot easier and more efficient. So building off Svitlana's remarks on human digital twins, what we're going to start seeing on our devices is human digital twins of ourselves that understand our preferences, that understand our intent. For example, if I want to book a plane ticket now, I have to go on a lot of websites. I have to compare prices. I have to think about, "Oh, when do I want to leave? When do I want to arrive?" And really spend a good amount of time looking for the right flight for me that I know is exactly what I want.

But I think what we're going to see is that on our phones, there are going to be AI agents that understand what I want, either by asking me or just by learning over time. And it's going to go find the cheap flight that guarantees that I don't have a middle seat and that I arrive by 6:00 P.M. so I can get dinner at my destination, and all the things that are important to me when I'm doing that. And we're going to see that in everything. Those features are going to just arrive on our phones whether we like it or not. In the longer term, I think some of the big evolutions are going to come with the physical embodiment of AI: the move of AI from the virtual world on our desktops and our phones into embodied intelligence operating on physical objects in the real world.
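The flight example above boils down to hard constraints (no middle seat, arrive by 6:00 P.M.) plus an objective (cheapest price). A minimal sketch of that selection logic, with entirely made-up flight data and preference fields standing in for what a real agent would pull from airline APIs and a learned user profile:

```python
# Hedged sketch of preference-aware flight selection: filter on hard
# constraints, then pick the cheapest remaining option. The Flight fields
# and sample data are invented for illustration.

from dataclasses import dataclass

@dataclass
class Flight:
    price: float
    arrival_hour: int   # 24-hour clock
    middle_seat: bool

def pick_flight(flights, latest_arrival=18, allow_middle=False):
    """Drop flights that violate hard constraints, then minimize price."""
    candidates = [
        f for f in flights
        if f.arrival_hour <= latest_arrival
        and (allow_middle or not f.middle_seat)
    ]
    return min(candidates, key=lambda f: f.price, default=None)

options = [
    Flight(price=320.0, arrival_hour=17, middle_seat=False),
    Flight(price=250.0, arrival_hour=17, middle_seat=True),   # cheaper, but middle seat
    Flight(price=280.0, arrival_hour=21, middle_seat=False),  # arrives too late
]

best = pick_flight(options)
print(best.price)
```

The "learning over time" part Bob describes would amount to the agent inferring values like `latest_arrival` and `allow_middle` from past choices instead of asking for them explicitly.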

So there's a lot of very cool research going on at companies like Boston Dynamics now that are taking large language models, integrating them into their robots and really giving them the ability to interact with physical objects to make decisions on their own and to really gather an understanding of the physical world that humans operate in all the time. And I think this is really important for the next evolution of AI. One of the important concepts of living in a physical world is there are consequences to our actions and things that can hurt us, things that can help us, things that can let us grow, things that can hold us back. And as AI interacts with the physical world, it'll really start to understand some of those consequences of its actions and really be able to make hopefully better ethical decisions in the future.

Daniel Serfaty: Well, let's keep that ethical comment in the background because I want to go back to it, but I'm trying to stitch together two ideas that you shared earlier, and Svitlana, your comment now about human digital twins. I think, in a sense, it goes in the same direction, because we know that for two people, let alone a larger team, to work well in the human space, in human teams, yes, you need to communicate with each other. And that's basically the level we are at right now with the prompts and the answers to the prompts. We learn to communicate in a language that looks like human language, but it's not exactly the same. But we need to have basic requirements for a team to operate well. We know from the science of teams that you need to have a shared mental model, and a component of that shared mental model is a mental model of the environment, the space, but also the mutual mental model.

In a sense, a model that predicts and anticipates your behavior and the behavior of your teammate so that you can be the best teammate you can be. You do the same, I'm sure, with your spouses; you do the same with your children. All human interaction, or interaction between two intelligent beings, is based on that concept. And so, this notion of a human digital twin is interesting because it implies basically that you'll be able to construct something that has a mental model of you. I wonder how much, on the human side, we have to do to have an accurate mental model of the digital twins that are going to look like us. So this notion of human-AI symbiosis is interesting to explore in the future. I'm bringing together the notion that you mentioned in the beginning, in a sense the human dimension, and now the digital twin technology, and they meet at that level to have a more harmonious, if not symbiotic, but at least harmonious relationship with the AI devices in our environment, whether they are virtual or physical, by the way, especially the ones that start to look like us.