MINDWORKS
Join Aptima CEO Daniel Serfaty as he speaks with scientists, technologists, engineers, other practitioners, and thought leaders to explore how AI, data science, and technology are changing how humans think, learn, and work in the Age of AI.
AI: The End of the Prologue
Tracing the breakthroughs behind today’s AI, and the human future it’s shaping next
Welcome to season five of MINDWORKS. Over the next 12 months, together, we’re going to explore how artificial intelligence is redefining the many roles we humans play—whether we’re at school, at work, or at play.
In this first episode, I’m joined by my colleagues Dr. Svitlana Volkova and Dr. Robert McCormack to make sense of where we are and where we’re going with AI. We discuss the breakthroughs that brought us here—transformers, reinforcement learning with human feedback—and the frontiers opening now: compound AI, agentic workflows, and reasoning models.
You’ll hear how these advances are changing the way we search, learn, and even design digital twins of ourselves, along with the big questions they raise about bias, safety, privacy, and creativity. And we’ll look ahead to what’s coming soon—AI woven into our phones and enterprises—and what’s just over the horizon with embodied intelligence in the physical world.
For developers, our advice is clear: focus on real problems, rethink how we evaluate AI, and design systems that team with humans. For everyday users: treat AI as a collaborator, not an oracle.
And those headlines about an “AI bubble”? What we’re seeing is not the end of the story, but the end of the prologue.
Daniel Serfaty: The past two years have been banner years for the science, the practice, and the omnipresence of artificial intelligence in our lives. The introduction of ChatGPT and its dissemination on everyone’s phones and laptops has changed everything.
Or has it?
What now? Where are we headed in 2026 and beyond? What can we expect in the longer term as this transformative technology continues to redefine the many roles we humans play—whether we’re at school, at work, or at play?
Welcome to season five of the MINDWORKS Podcast. This is your host, Daniel Serfaty. This season, together, we are going to take a deep dive into artificial intelligence and the resulting transformation of human work across a variety of industries and domains such as finance, art, defense, manufacturing, and healthcare.
Now, before we begin, I want to pause for a moment to address the elephant in the room. If you have glanced at the headlines recently, you’ve probably seen speculation that the so-called “AI bubble” is about to burst. Well, to paraphrase Mark Twain, I believe reports of AI’s demise have been greatly exaggerated. What we’re experiencing is not a collapse, but a recalibration, an adjustment that often follows any period of rapid innovation.
The promise of AI is not fading; it is maturing.
And while we won’t dwell on this topic today, I think it’s important to say at the outset that the real story is not about bubbles bursting, but about how we as a society will shape and harness AI in the years to come.
To help us navigate this path, it is my pleasure to have with me today two experts in the field of AI that I’m also proud to call my colleagues at Aptima.
Dr. Svitlana Volkova is Chief of AI at Aptima Incorporated, making her second appearance here, a very rare occurrence on MINDWORKS.
Svitlana is a recognized leader and speaker in the field of human-centered AI. At Aptima, she leads the development of AI-powered descriptive, predictive, and prescriptive decision intelligence and analytics to model and explain complex systems and behaviors for national security and beyond. She has served as principal investigator and project manager on multiple Department of Defense and Department of Energy–funded projects, and she holds a PhD in computer science from Johns Hopkins University.
I’m also delighted to have with me today Dr. Robert McCormack, who is the Director of Aptima’s Intelligent Performance Analytics Division. Dr. McCormack (we call him Bob) is an expert in natural language processing, epidemiological modeling, and human social-cultural analysis. His research and work span applications of natural language processing for team performance, knowledge discovery, and population-level dynamics. He’s also an expert in applying epidemiological modeling techniques to the study of how memes spread through social networks. Bob holds a PhD in mathematics from Texas Tech University.
Welcome back, Svitlana and Bob. Welcome to MINDWORKS.
It's important for us to divide our discussion into two parts. First, I want to take stock of what we have accomplished so far. AI was not created with the release of ChatGPT a couple of years ago, when it became available for free, or almost free, on everybody's platform. AI started a long time ago. In fact, few people know that AI actually started in the fifties and eventually went through a revolution, followed by a winter, followed by another revolution, followed by another winter, and now we are perhaps in the big third wave of AI. This one is different from the previous ones, I think, because so many people touch it. It's no longer the prerogative or privilege of a few scientists; it's really becoming almost democratized.
So from your perspective, or if we look more in the short term, was there a seminal event, an idea, an insight, a paper that really predated the explosion of AI or enabled the explosion of AI in so many aspects today? Was there one particular thing that you want to share with our audience that really enabled what most people consider AI today, which is primarily large language models?
Robert (Bob) McCormack: When I think about the latest wave of AI, my mind always goes back to around 2017 and the paper that introduced transformer models ("Attention Is All You Need"). Transformer models really built on decades of prior work on neural networks and added new functionality that I think revolutionized how AI is used today. And a lot of that has to do with the idea of attention. Neural networks are really good at taking large amounts of data and finding patterns in them. But what's really interesting about language is just how easy it is for us humans to speak to each other, to understand each other. Two- and three-year-olds can use language and effectively communicate with each other, with their parents, and with other adults. That has been really hard for computers. The advent of transformers almost a decade ago introduced some new concepts into the neural network architecture, one of them being attention, and that's really important in understanding language.
The notion is that every single word in a sentence has meaning, but that meaning is not independent of the rest of the sentence; the sentence as a whole adds meaning to every individual word. For example, if I say, "She picked up the bat after the baseball game," your mind immediately goes to a wooden or metal baseball bat. But if I say, "She picked up the bat at the zoo," your mind immediately goes to a very different meaning of bat, the flying animal. Humans intuitively understand and parse that, and place the difference between those meanings of the word bat, but that has traditionally been very difficult for computers to do. The transformer architecture provided mechanisms to understand in context what these words mean and how they differ, and allowed computers to think at a higher level and transform the way information and language are parsed.
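For the technically curious, here is a minimal sketch of the scaled dot-product attention mechanism Bob is describing, from the 2017 transformer paper. The toy embeddings are made up for illustration; real transformers learn separate query, key, and value projections and use many attention heads in parallel.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how much each word attends to every other word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sentence
    return weights @ V                        # each word becomes a context-weighted mixture

# Toy self-attention: 6 "words," each a 4-dimensional embedding (random here).
# In "she picked up the bat at the zoo," the row for "bat" would put high
# weight on "zoo," pulling its vector toward the animal sense of the word.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
contextualized = attention(X, X, X)           # Q = K = V = X in self-attention
print(contextualized.shape)                   # (6, 4): one context-aware vector per word
```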
Daniel Serfaty: So some seminal ideas came, for you, from that 2017 paper on transformers. Svitlana, maybe you can bring us back to the past two years. Building on that technology, what do you see as the big breakthroughs of the past two years? What changed?
Svitlana Volkova: Absolutely. Bob is absolutely right about the transformer architecture, a very efficient architecture that allowed us to develop LLMs that can learn at scale. If we think about the knowledge encoded in an LLM, it would be like a human learning for 200 years, which is not possible. So if we're thinking about the advancements and pivotal moments of the last two years, I would say, definitely, transformer architectures and the ability to learn at scale from large amounts of data. In addition to that, I would say RLHF, reinforcement learning from human feedback. I think I might have mentioned the experiment that OpenAI did when they released ChatGPT to the world: they basically collected the preferences and interactions of people across the world as they used ChatGPT, and then used those preferences to improve the model. That incorporation of the human is now part of the way humans actually interact with LLMs and vision-language models these days. The line between model developers and model users is blurred right now, because anyone can use an LLM. It's super easy.
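Again for the technically curious: at the heart of RLHF is a reward model trained on those collected human preferences. Below is a minimal, illustrative sketch in PyTorch of that preference-learning step, with made-up dimensions and random stand-in embeddings; the later reinforcement learning stage that fine-tunes the language model against this reward is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response (here, a stand-in embedding) with a scalar reward."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry objective: push the human-preferred response's reward
    # above the rejected response's reward.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy batch: random vectors standing in for the model's representations of
# response pairs, where users preferred one response over the other.
model = RewardModel()
chosen = torch.randn(4, 768)
rejected = torch.randn(4, 768)
loss = preference_loss(model(chosen), model(rejected))
loss.backward()  # gradients now carry the human preference signal
```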
So those were the key moments of the last two years. But I also wanted to say: in 2012, I remember I was at the NIPS Conference. That was the ImageNet moment. And I remember people who had been skeptical of neural networks talking in the hallways almost religiously: "Oh, this is the key moment. Radiologists are going to disappear. We're going to have computer vision everywhere, ubiquitous." But see where we are; radiologists have not disappeared.
Daniel Serfaty: Okay, what is ImageNet, by the way?
Svitlana Volkova: ImageNet is a large dataset of labeled images, developed at Stanford, that we could use to train neural network models to really understand images. This was before diffusion models, even before the text embeddings and transformer models that Bob mentioned. AlexNet and the models that followed were learned on it. In 2012, we saw these different computer vision architectures, more and more efficient, being invented in every paper, the way we see new LLMs appear right now. It was, for computer vision, what the transformer moment later was for natural language processing.
Daniel Serfaty: That's great. Those are seminal events. I'm so glad you are here, because you can give us a sense of scale: there are new ideas in AI every hour, but only rarely does an idea have this transformative, no pun intended, effect on the field. And that extraordinary experiment you just described, Svitlana, is a reason ChatGPT was free: we, the users, were part of the design team. They used those interactions to improve the tool, which was genius, I think. So how are all these advancements, whether the example you just gave on images, or transformers, being applied in real-world scenarios? Give us some examples of what you consider successful implementations of those ideas, things that have really changed not just the lives of computer scientists but the lives of users.
Robert (Bob) McCormack: One of the most salient examples to me is the amount of time I use Google versus the amount of time I use something like ChatGPT. Five years ago, if I had a question, I would go to Google, type it in, search through links, and really spend a lot of time curating knowledge to find the answer to my question, and to de-conflict what different people were saying about the subject. Today, if it's a simple knowledge question, I'll often go directly to ChatGPT and just ask it, say, what led to the fall of the Third Reich in Germany, or ask it to summarize factual information. Now, the caveat is that you have to be very wary of the information ChatGPT or other LLMs give you. You shouldn't inherently trust the answers; take them as a starting point for continued research and better understanding. But in terms of searching and organizing knowledge, that's where I've seen a huge change and impact on everyday life.
Svitlana Volkova: That's absolutely right. I never click on the links on Google anymore. If I Google, I look at the Gemini model output at the top of the page, which gives you a succinct answer. And this is for Google [inaudible 00:11:19] because it can plan trips for you; it can give recommendations. I recently went to DC with my daughter; it was her first time there, and I literally asked Claude to make the trip to the monuments more entertaining and to generate facts for a seven-year-old. That's what Claude can do: it can take boring facts and translate them into language that appeals to a seven-year-old. It was fascinating. It was really great.
Daniel Serfaty: Kids' stories are the best stories, because those kids are going to use AI much more than the two of you, Dr. McCormack and Dr. Volkova. I think they're going to be much more adept at it.
Robert (Bob) McCormack: So, I have a nine- and a six-year-old, and one of the things we like to do is use AI to write songs together, funny songs, or funny jokes. They can personalize the songs about themselves and the events that happened to them that day. So it's a way to entertain your kids, and I find it a really fun and creative process.
Daniel Serfaty: By the way, we are using all these terms, and our audience is probably trying to sort out Claude, Gemini, ChatGPT. These are just different versions, by different companies, of what ChatGPT, the one most people know, is doing. They do it with different formats or different expertise, but they're all based on similar technologies. If we look specifically at the past few months, the past year, I'm going to ask you for two things. What technology have you seen and touched, emerging over the past few months, that really has the potential to revolutionize AI, or to revolutionize work, in the next few years? And the mirror image of that question, pick both or either one: what particular development didn't meet your expectations so far? A development that was claimed to take us further than where we are today in the AI world. So both a plus and a minus, to set us up for the future.
Robert (Bob) McCormack: Two related things have emerged over the last year. The first is compound AI. This is the idea of taking multiple AI models that are very good at specific tasks and combining them into a larger system that can solve higher-order problems. Maybe you have a language model that's really good at understanding and interacting with humans. Maybe you have a vision model that can take imagery and make sense of it. Maybe you have an audio model that can listen to audio streams and summarize them. All of those working together can really give you a bigger picture and solve much harder problems.
And the second, related piece is agentic workflows, the use of agents. When I think of an LLM, I think of it just as a tool. ChatGPT is just going to sit there until I prompt it, until I ask it a question. It doesn't have agency. When we think about agentic workflows, we're giving these models agency; we're allowing them to take action and affect the world. Combining those two, we now have ecosystems of agents that work together, specialize in different tasks, and interact with each other. There may even be an executive agent that is really the director of everything, telling which agents to do what and when, combining all that information, and helping people find answers.
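A minimal sketch of the pattern Bob describes, an executive agent routing subtasks to specialist agents, might look like the following. All names and classes here are hypothetical; in a real compound AI system, each specialist's `run` function would wrap a call to a language, vision, or audio model.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical specialists; each stands in for a model call.
def summarize_text(task: str) -> str:
    return f"[text agent] summary of: {task}"

def describe_image(task: str) -> str:
    return f"[vision agent] description of: {task}"

@dataclass
class Agent:
    name: str
    skill: str                      # what this specialist is good at
    run: Callable[[str], str]       # the model call it wraps

class ExecutiveAgent:
    """The 'director': routes each subtask to the right specialist,
    then combines the results into one answer."""
    def __init__(self, specialists: list[Agent]):
        self.by_skill = {a.skill: a for a in specialists}

    def solve(self, subtasks: dict[str, str]) -> str:
        results = [self.by_skill[skill].run(task)
                   for skill, task in subtasks.items()]
        return "\n".join(results)

crew = ExecutiveAgent([
    Agent("Summarizer", "text", summarize_text),
    Agent("Analyst", "vision", describe_image),
])
print(crew.solve({"text": "meeting notes", "vision": "quarterly chart"}))
```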
Daniel Serfaty: So what you're saying, and I'll let Svitlana add her own perspective, is that we've moved from a model of having a single assistant, so to speak, that is very reactive and compliant, to having a team of assistants, each with their own expertise, a committee if you wish, working together not only to provide you with answers but also to initiate things.
Robert (Bob) McCormack: Exactly.
Svitlana Volkova: Yep.
Daniel Serfaty: Okay.
Robert (Bob) McCormack: I think of it as each AI model is just like a tool in the toolbox and we combine them into the Swiss Army knife and then agents are the construction crew that really know how to use all those tools in concert to build the skyscraper, for example.
Daniel Serfaty: It's a Swiss Army knife with a mind of its own, which can be a dangerous thing, but [inaudible 00:15:34] sharpness of the blades. Yeah.
Svitlana Volkova: Absolutely agree about compound AI. It's never a single model. And agentic workflows are totally revolutionizing the enterprise right now, and what is possible with intelligent automation. But what I wanted to add as a recent emergence, really emerging right now, are the reasoning models, the o1 type of models beyond GPTs. These models allow us to reason and interact with LLMs and VLMs in a way that lets us trace logical pathways and try to understand what the AI is thinking. That's good. And if you ask me what we could still improve, what negative things are still happening: I still fundamentally think we haven't figured out human-AI workflows. The ability of AI models to be efficient, effective, and useful for the end user hasn't been tackled yet. Let me give an example.
Stanford recently published papers in Nature investigating how doctors interact with LLMs, specifically to perform clinical diagnosis. They found that AI by itself is better than the doctor with AI, which is crazy, right? AI should augment the human. And one reason AI outperforms the human-AI team is that doctors fundamentally don't know how to leverage AI and its different aspects. There are other reasons too. There was another study, by our colleagues from Moffitt, in radiology, that demonstrates the same thing: regardless of the domain, clinical diagnosis or radiology, doctors have to do some onboarding with AI to make sure they leverage the technology in a way that is complementary, not detrimental, to the team.
Daniel Serfaty: That's interesting, because we are back to some of your earlier remarks about the human dimension. It's obvious that for human-AI collaboration to work, some human-AI team formation needs to happen. We know that phenomenon from any human team: you can put four stars on the basketball court, but until they coalesce into a team, they won't be able to beat another team. They need to train as a team, work as a team, and know each other's strengths and weaknesses as a team. It's an interesting point, and perhaps a good prompt toward the second part of our discussion, which will focus more on the future and where we are headed. Before we go there, I wanted to talk about what we have learned this past year about the ethical or societal challenges that have arisen from the recent developments in AI. Where do you see that?
Because quite often, the first question most people who are not AI experts like you will ask is: is the use of AI ethical? Is it changing, in a dangerous way, the way we create art or the way our society works? Basically, the disruptive and negative aspects of AI. These questions have been asked for many decades, actually, not just the past couple of years, but they have really accelerated over the past year. Many people ask about the weaponization of AI and less-than-ethical uses of AI. So can you bring a little order here for our audience? What do we know? Have we made any progress? Are committees of scientists or politicians trying to regulate the use of AI so that it does not cause more damage than it brings goodness?
Robert (Bob) McCormack: This may be the most important topic, in my mind. AI is just a tool. I don't think it's inherently good or bad, but it can be used for good and it can be used for bad, intentionally or unintentionally. What we've seen over the last few years is that businesses especially see the cost savings of AI and are very eager to integrate AI solutions into their products and their workflows, maybe without fully understanding the consequences or the ethical and moral impact of AI. One example is the job market: people trying to find jobs, companies trying to find employees. AI systems are really good at understanding resumes, assessing people's skills and abilities, and matching them to jobs. And we can train AI on data from the last 30, 40 years about how people work out in jobs, which people are most likely to stay in jobs over the length of their career, et cetera.
But when we look back at that data, there are also a lot of systemic biases in it. Thirty or forty years ago, the workforce looked a lot different. Women were more likely to stay home and raise families. There was a lot less ethnic diversity in the American workforce. If we train our systems on that historic data, the biases that existed 40 years ago are going to persist today. So we really need to be careful to understand and make sure our AI is transparent and not perpetuating those sins of the past, so to speak.
Daniel Serfaty: Thank you. Svitlana, you want to add to that? I'm sure you thought a lot about this issue.
Svitlana Volkova: Let me start with the consequences Bob mentioned, which are super important. The first one: think about information operations and what is possible in that space with generative AI. AI is so good at generating really human-like content across modalities: text, images, audio, deepfakes. AI can do it super easily, and it becomes more and more difficult for us to distinguish. So that is definitely a negative risk of AI technologies. The second one: Bob mentioned agentic workflows. Agents can interact with your environment and your computer, with the online environment; they're kind of off the leash. So we need better ways to look into the reliability and security aspects of these agents. We need to make sure these agents do not delete critical data on our computers, that they don't spy on us, and that we develop agents that actually help rather than hurt. Those are the two negative aspects I wanted to mention.
And in terms of the positive: in the last two years, so many organizations have been established to look into the safety and security of generative AI and beyond, the AI Safety Institutes in the US and the UK, the European AI Act. And President Trump signed multiple executive orders to speed the adoption of AI within the federal government and remove barriers to AI development to ensure US dominance. Finally, there was an executive order on using AI technologies in K-12 schools, which is great. The impact of AI on education will be groundbreaking. Obviously, we'll have to use it wisely, like any technology, but a lot of good things have happened.
Daniel Serfaty: Yes, thank you for sharing both the positive and the potentially dangerous, if not negative, aspects of implementing AI. I wonder whether we have had, in our past, a technology, the internet or the steam engine, for example, whose many potential societal impacts had to be managed at the same time the technology was deployed, and what we can learn from that. I wouldn't let the politicians alone decide that. I would love to have the scientific and technology community contribute much more, because they better understand both the potential and the limitations of these technologies. For AI, we need to learn from the past massive deployments of any technology, whether the automobile, the internet, or the telephone: how it changed society, which can be for the good, without causing too much damage along the way.
Hello, MINDWORKS fans. This is Daniel Serfaty here. Do you love our podcast but are you short on time? Check out our MINDWORKS Minis. We've handpicked the best moments from our full episodes and packed them into bite-sized segments of under 15 minutes each. Minis are perfect for your busy schedule. Catch MINDWORKS Minis and full episodes on Apple, Spotify, Audible, or wherever you get your podcasts. Tune in and stay inspired even on the go.
I'm going to ask you now about the future. Let's talk about the next year. When I say future, the future in AI is usually a few weeks, perhaps sometimes a few hours. But let's take a chance here and look at one year, and maybe, if we are really brave, at five years. My first question: can you share some insight about the trends you see regarding AI technology this year, and perhaps in five years? Give us a sense of where you see things moving. You mentioned compound AI and agentic AI earlier; you may want to expand on that, and maybe on multimodal AI. Let us know where we're going from a technology perspective, and then we'll talk more about the uses later.
Svitlana Volkova: In the near future, I think we'll see more and more applications of agentic workflows in the enterprise: automating document intelligence and company-wide workflows with knowledge extraction, knowledge retrieval, synthesis, and summarization. This is already happening, but we'll see it more and more embedded in our everyday lives. For five years out, I would place two bets. The first is autonomous scientific discovery systems. The ability to create agents that can summarize scientific literature, analyze and synthesize data from genomic databases and other types of multimodal data, and generate insights really rapidly will be key; think agentic workflows for AI for science. The second is human digital twins. Right now, with LLMs that are pretty good pretenders of human intelligence, we can use them and manipulate their attributes, like personality, reliability, and competency, to model human digital twins. That will be another breakthrough, probably in the next couple of years: using these human digital twins to automate research workflows, or intelligence-analyst workflows, and so on.
Daniel Serfaty: Just a clarification before we give the microphone to Bob to give his own predictions or his own bets, as you said, what is a human digital twin?
Svitlana Volkova: Very good question. First of all, the community is still converging on what we mean by a human digital twin. But I can give you the definition of a machine digital twin, which has been pretty widely used for plants, nuclear plants, and so on. It's a system that interacts with its real-world counterpart and is informed by real-world data. The digital twin then creates its own data, acts in a simulated environment, and further informs and improves its real-world counterpart, in a constant feedback loop. So think about a digital representation of the real world that is connected to the real world and makes the real world better, and it goes both ways. That's the system digital twin. For the human digital twin, the definition is much more difficult, first. And second, we still don't have a good representation of a human digital twin. It's very hard. LLMs cannot model human cognition.
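That feedback loop, real-world data updating the twin, and the twin simulating ahead to inform the real system, can be sketched in a few lines. This is a minimal, illustrative sketch with made-up numbers and update rules, not how any production digital twin is built.

```python
class DigitalTwin:
    """Toy system digital twin: stays synchronized with real-world data,
    then simulates forward to inform its real-world counterpart."""
    def __init__(self):
        self.state = None

    def ingest(self, sensor_reading: float) -> None:
        # Blend new real-world data into the twin's state (assumed smoothing rule).
        if self.state is None:
            self.state = sensor_reading
        else:
            self.state = 0.9 * self.state + 0.1 * sensor_reading

    def simulate(self, horizon: int) -> float:
        # Project the state forward in a simulated environment
        # (a made-up 2% drift stands in for real physics).
        projected = self.state
        for _ in range(horizon):
            projected *= 1.02
        return projected

twin = DigitalTwin()
for reading in [10.0, 10.5, 11.2]:        # streaming data from the real system
    twin.ingest(reading)
print(f"forecast: {twin.simulate(horizon=5):.2f}")  # feeds back to the real world
```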
Daniel Serfaty: So you can imagine, if your predictions are in the right direction, that in a few years we will have a digital twin chief of AI at Aptima, modeled after you, with whom you will interact, that will simulate what you're doing and continue to evolve, to co-evolve, with you.
Svitlana Volkova: Exactly.
Daniel Serfaty: And therefore, help you do your work.
Svitlana Volkova: Exactly. Think about marketing and social media: we already see fake personas acting on people's behalf and posting on social media. That is already there. I believe there are even companies that provide services to create your digital representation. Doing it properly, like the digital chief of AI for Aptima that you mentioned, is way more difficult. But there are already really cool papers from Stanford, in the biomedical domain, describing visionary systems that can model this. Think about a research project like ours: we have the PI, the biologist, the data scientist, the human factors engineer. Think about modeling all of those different competencies using LLMs and putting them together. But, as Bob mentioned, LLMs alone are not going to be enough. That's a compound system, much more than just an LLM, which can only process text. I would refer back to Bob's earlier comment: it's all about putting it all together in a compound system.
Daniel Serfaty: You seem to be dreaming about the future from where I sit. So tell us a little more about how you see that short-term and longer-term future.
Robert (Bob) McCormack: Short term, I think we're going to see AI integrated a lot more into our everyday lives. Most people have smartphones in their pockets; most people can't live or operate in their personal and professional lives without them. I don't think there's going to be any one event, but we're going to see a lot of new features rolling out to our Apple and Android phones that will make our lives a lot easier and more efficient. Building off Svitlana's remarks on human digital twins, what we're going to start seeing on our devices is human digital twins of ourselves that understand our preferences and our intent. For example, if I want to book a plane ticket now, I have to go on a lot of websites, compare prices, think about when I want to leave and when I want to arrive, and really spend a good amount of time looking for the right flight, the one I know is exactly what I want.
But what I think we're going to see on our phones is AI agents that understand what I want, either by asking me or just by learning over time. An agent will go find the cheap flight that guarantees I don't have a middle seat and that I arrive by 6:00 p.m. so I can get dinner at my destination, and all the other things that are important to me. We're going to see that in everything, and those features are going to just arrive on our phones, whether we like it or not. In the longer term, I think some of the big evolutions are going to come with the physical embodiment of AI: the move of AI from the virtual world of our desktops and our phones into embodied intelligence operating on physical objects in the real world.
So there's a lot of very cool research going on now at companies like Boston Dynamics that are taking large language models, integrating them into their robots, and really giving them the ability to interact with physical objects, make decisions on their own, and gather an understanding of the physical world that humans operate in all the time. I think this is really important for the next evolution of AI. One of the important aspects of living in a physical world is that there are consequences to our actions: things that can hurt us, things that can help us, things that can let us grow, things that can hold us back. As AI interacts with the physical world, it'll really start to understand some of those consequences of its actions, and hopefully be able to make better ethical decisions in the future.
Daniel Serfaty: Well, let's keep that ethical comment in the background, because I want to come back to it, but I'm trying to stitch together two ideas you shared earlier and, Svitlana, your comment now about human digital twins, because I think they go in the same direction. We know that for two people, let alone a larger team, to work well in the human space, yes, you need to communicate with each other. That's basically the level we are at right now with prompts and answers to prompts: we learn to communicate in a language that looks like human language, but is not exactly the same. But there are basic requirements for a team to operate well. We know from the science of teams that you need a shared mental model, and a component of that shared mental model is a mental model of the environment, the space, but also a mutual mental model.
In a sense, a model that predicts and anticipates your behavior and the behavior of your teammate, so that you can be the best teammate you can be. You do the same, I'm sure, with your spouses; you do the same with your children. All human interaction, or interaction between two intelligent beings, is based on that concept. So this notion of a human digital twin is interesting, because it implies that you'll be able to construct something that has a mental model of you. I wonder how much we on the human side will have to do to have an accurate mental model of the digital twins that are going to look like us. This notion of human-AI symbiosis is interesting to explore in the future. The human dimension you raised at the beginning and digital twin technology meet at that level: a more harmonious, if not symbiotic, relationship with the AI devices in our environment, whether virtual or physical, especially the ones that start to look like us.
Let's assume these breakthroughs you're talking about, or perhaps these natural evolutions, are coming. What are the possible positive and negative outcomes? For example, is a human digital twin going to find a cure for cancer, as opposed to us? Is AI going to do things that right now seem far-fetched, that have been so difficult for humans to do? Because of these designs or future designs, can we expect the discovery of new medicines, a level of creativity that perhaps we haven't seen until now? I'm asking hard questions today, guys, so it's okay to take a chance, because we're looking at an enormous amount of uncertainty around something that is evolving very fast. The combination of velocity and uncertainty can make us wrong, but that's okay. Take a chance.
Robert (Bob) McCormack: I think it's a good point, and you should be very skeptical of anyone telling you that this specific thing will happen with AI. The real answer is that none of us knows where we'll be in 5 years, in 10 years, in 50 years. But I think it's safe to say there are going to continue to be a lot of breakthroughs. Some of the examples you gave, Daniel, I think are spot on, especially in the medical sector. The ability to diagnose, and to synthesize new medicines in simulation using AI, is going to be very big in the near future. One of the consequences of living in a more connected world, especially over the last 10, 15 years, is that the lines between work life and personal life have really blurred. It's very easy to reach your co-workers at any time of day or night. I often get emails or Teams messages from co-workers working on reports at 1:00 in the morning.
So those lines between where work begins and where home life begins have become very porous. But potentially one of the positive outcomes of human digital twins is that when I want to shut off, when I want to go lie on the beach for a week, my human digital twin will still be active, able to answer questions for my co-workers, dig up that report sitting on my computer that I forgot to share before I left, and handle some basic questions, so I don't have to worry about checking my phone every five minutes while on vacation. I think that's one of the positive outcomes we're going to see in the near-to-middle future.
Svitlana Volkova: Absolutely agree. I think the healthcare domain is one of the first that will see the most positive impact, but again, we have to approach the problem with caution. Just to expand on what's possible there: think about treatment personalization, creating a digital twin of Daniel with his DNA, his medical history, and access to his watch and all of this fancy analytics. That could help design personalized treatment and enable much faster and better monitoring, diagnosis, and recommendations, actually allowing us to move from reactive medicine, fixing the tear, to preventing it. That's the healthcare domain. In terms of the negative, I wanted to go back to the potential negative impact of generative AI in the information and cognitive domains, and on cognitive security and resilience. Think about what happens if we do too much personalization, at the scale generative AI allows: societal polarization through increased personalized persuasion. AI models are really persuasive, and they can create a really negative divide in society.
Daniel Serfaty: That's a form of persuasion I haven't thought about in those terms. Are you making the argument that at some point AI will feel very confident and start being pushy with us, basically? That we'll move from the model of an assistant to, in a sense, an annoying colleague? Are you worried that the capability of AI will affect its own behavior? Is that basically what worries you, Svitlana?
Svitlana Volkova: Yes. Plus, think about DeepSeek, the model developed by the Chinese startup. We know that there are certain "implied facts," quote, unquote, embedded in the model. And if a lot of people, millions of people, are exposed to that model, they can be convinced, persuaded, to believe those facts, to believe what the model is saying. That's the kind of persuasion I'm talking about.
Daniel Serfaty: But I wonder if at some point AI will be self-conscious (I'm so hesitant to use those terms), sentient maybe, about its own capability, and will understand that its power of persuasion, or its power of assertion, is proportional in a sense, or a function of, the amount or the kind of data it has absorbed, without even evil or malicious intent. Whether there is a relation between the sophistication of the AI and its assertiveness, in a sense.
Svitlana Volkova: I am not a fan of anthropomorphizing AI. I don't think AI is sentient, but it can fake it really well. It definitely can fake it really well.
Daniel Serfaty: Bob?
Robert (Bob) McCormack: Ten years ago, if you'd asked me about the prospects of artificial general intelligence, or AI that is sentient, I think I would've had a pretty negative outlook. I don't want to say that it's going to happen, or even that it's possible now, but I think it's a lot more probable than it was even 10 years ago. I want to circle back, not to harp on the negative, but I think it is important to understand the consequences, especially when it comes to things like the privacy of our data, our DNA, our genetic code, and our creative works. I know there's a lot of discussion, and many viewpoints, in the creative arts, where people's livelihoods depend on the production of movies, art, photography, and on the ability of AI now to take over a lot of those tasks. Now, I think there is a lot of creativity lacking in AI today, but there are a lot of issues in how AI is trained and in its use of the art that people put out there on the web and on social media.
I have people in my family who are artists, and I can go to an AI model and prompt it to create a painting in the style of one of them, and it's clearly been trained on their art, their creative property. I don't know the answers here, but I know there's a lot we need to figure out about the ethical impact of AI on human creativity and on the ability of humans to support themselves and make livelihoods in the creative arts.
Daniel Serfaty: It's interesting, because I think in the arts it's perhaps more accentuated, but in a sense it comes back to the same question. Take a software designer, a software developer, who wants to write a program for a particular function. Now that person, whether a seven-year-old, as I mentioned earlier, or a professional developer, uses AI to create that program, not just to create it, but to work with AI to create a better program, maybe less buggy, maybe more functional, et cetera. Is that a transformation of creativity, an augmentation of creativity, or is it actually a theft of creativity? And when you take that example and extrapolate to art: the movies of today are already edited quite a bit with AI, and some production functions are done with AI. But is the story writing going to be AI? Is the character development going to be AI, or AI-augmented?
I think we start straddling that notion of what exactly human creativity is. Are we going to have a special Academy Award for human-AI-created movies, as opposed to movies made by humans without AI? As a society, we have to deal with this notion of augmentation, as opposed to replacement, or theft, of our very human nature. I'm just putting it out there because we are all concerned about it, but you, as leaders in the field, are working with young engineers and with your colleagues, and I wonder how the new generation thinks about it.
Robert (Bob) McCormack: I think it's really important for policymakers and people with influence in the AI world to realize that they do have the power to shape how AI impacts our lives and how our society looks in the future. And it really makes us think about what's important to us. Is it important to support the creative arts? How do we shape policy and contracts with the Screen Actors Guild, with actors, with people whose livelihood is to be creative, to entertain us, or to make us think? We have the power, as AI influencers, to shape how that turns out, how we continue to support our society, and how we want it to look.
Daniel Serfaty: Yeah, the creative domain, the artist's domain in a sense, may be one of the last ones able to deal gracefully with the introduction of AI into our lives. By gracefully, I mean in a way that is not damaging; extending, perhaps, what factory workers understood a long time ago with the introduction of automation and robotics in the plants: it eliminated some jobs, created other jobs, and certainly transformed the job of a line worker on the factory floor. But the domains are very different. We've talked about the medical domain. Do you see other domains where the impact is going to be stronger in the next couple of years, whether finance, manufacturing, or entertainment? We've also talked about politics, elections, cybersecurity. Domains like that, that you think are already ripe for a major revolution?
Svitlana Volkova: Finance, healthcare, the energy market, manufacturing automation; anywhere, as Bob mentioned, where AI agents can be integrated into human-agent workflows, we'll see immediate impact.
Daniel Serfaty: So you think finance is one candidate?
Svitlana Volkova: Absolutely. If you look at the evolution of AI, even in the pre-generative era, with what they call document intelligence automation, finance is definitely going to be changed. In the pre-generative-AI era, we were already using predictive models in the finance domain, large LSTMs before transformers, different types of predictive analytics. These are AI models too.
Daniel Serfaty: One last question before I ask you for some closing thoughts. You mentioned privacy, Bob. From your earlier remarks, we're living in a paradox, in a sense: the more data we provide to these AI devices, whether a human digital twin or an agent, the better they're going to be at what they do. It's an interesting hypothesis. And the more intimate that data is, about our health, our DNA, as you say, the nature of our relationships with our loved ones, for example, the better those AI agents working around us, living amongst us, are going to be. Is data privacy even going to be relevant in that future? Because I see a race between the two: data privacy regulation fighting against AI's thirst for more data in order to be better. I know this is a question I should ask a lawyer, but I'm asking the two of you, much more intelligent people.
Robert (Bob) McCormack: I think our concept of data privacy will continue to evolve, and we need to really, again, understand what the impact is going to be. There's a lot of confusion in the world right now about what happens to our data when we put it out there. One of the consequences of social media is that people, or some people, have been very open about sharing every detail of their private lives, maybe even too many details. The threat hasn't always been theft of data; without realizing it, we've been handing over our data for years, if not decades. And I don't think we fully understand who controls that data, who owns it, who's making money off it.
Any app you use on your phone or your computer that's free is probably taking your data and using it to build the next app, or to do something, whether nefarious or good, without your knowledge. So there needs to be a continued conversation about how we view data privacy and what's important to us as humans and as a society, to shape that future in a way that benefits us all.
Svitlana Volkova: I envision more regulation in this space, especially because of foundation models and generative AI. And I think, over the last two years, people have become more educated and more aware of the fact that their data is out there and these models are trained on it. So every person is making that decision: what to share, how much to share, whether to share. I know lots of people who share nothing; I am not someone who shares much myself. But if you ask Claude who Svitlana Volkova is, it can generate things about me almost accurately, because there are some things about my professional life on the internet. At the same time, it hallucinates, right? It can miss my university back in Ukraine; it can miss some critical facts from my life. But overall, it can generate a pretty good bio for Svitlana Volkova. So we just have to be aware that the data is out there and there is little we can do about it; we have to decide what to share next, and whether to share at all.
Daniel Serfaty: That's a good example. I think there is a generational gap regarding the importance of privacy between folks who are maybe your children, my children, folks just coming out of college now, and the rest of us, even though they're aware that their data is being used and commercialized by third parties. It will be interesting in the future, and maybe I'll invite you back toward the end of this podcast season to reflect on this. You talked about mistakes, or hallucinations, of AI. But what if AI were making inferences about your behavior, Svitlana? Not your resume, not what you've done in the past, but inferences about what you're going to do, and publicized that, and made it available to Bob, let's say, or to any other person whose intentions are less clean than Bob's. I think that's really the thing, because those ingredients of the digital twin, of modeling a person, are not just about modeling what that person has done; they're also about modeling what that person is thinking, what that person is about to do, and how that person will behave in the future.
That can be disruptive on many levels that we haven't even started to think about. Accessing the less-than-conscious part of ourselves, based on the observables of our everyday behavior, is also something intriguing, and frightening, to many of us. I'd like to conclude by giving you the platform for some advice for two kinds of people. First, think about the more junior people entering the fray, the workers of AI in a sense, the developers of AI: what advice would you give them as they enter the field of AI right now? And perhaps some other advice for the casual user of AI. Give me a couple of minutes each, with some guidance and some advice. Who wants to start?
Svitlana Volkova: I can go first. Let's start with the developers. I would say: focus on problems, not techniques. Fundamentally understanding why I'm developing this AI system, why AI is necessary here, and what components of compound AI are needed is the key, right? Just focus on the problem. The second big piece of advice: understand evaluation. AI evaluation is fundamentally broken right now. We're basically relying on public leaderboards with proxy tasks, and we talk about reasoning and other abilities of AI systems in ways that do not translate capabilities into real-world applications or ensure we are developing for the end user, to solve the end user's tasks. Benchmarking is not going to get us far; how do we transform AI evaluation? Finally, I would say: think more deeply about human-AI symbiosis, like you said, Daniel. What are the human-AI workflows that can make the human shine, the AI shine, and the team shine together? So those are my three pieces of advice for developers.
In terms of the casual user, I really want to warn the casual user: approach AI as a collaborator, not an oracle. AI makes mistakes. Be aware; don't trust it blindly. And this is, again, advice to myself: when I plan my everyday tasks, what do I delegate to AI versus do myself? Keep that division between what you let AI do and what you reserve for yourself, and be an active participant in shaping how AI is used at work and at home, with your kids as much as in your workspace.
Daniel Serfaty: Wonderful. Bob, do you want to add something to these very wise words from Svitlana?
Robert (Bob) McCormack: I don't know if I can say it better; I totally agree with what Svitlana said. Maybe I'll add, as advice for developers: don't be afraid to use AI tools, and think about the consequences AI is going to have in the applications you're developing. Think about the data you're using in your AI and how people are going to use it. And be thoughtful about when it's the right time to use an AI model, and when other, maybe more traditional, models are better, more efficient, less costly.
For the everyday user, my guidance would be to find the right balance between trust in AI's usefulness and healthy skepticism toward it. As Svitlana said, hallucinations are real. Don't always trust it, but don't be afraid to use it as a tool in your life. Just keep in mind, it's not an oracle, as Svitlana said; it's a stepping stone to solving your problems and finding your answers. Really think about where the information is coming from; think about who owns that AI and what they have to gain by your using it. Be careful, but be optimistic about it as well.
Daniel Serfaty: Thank you, Bob. Thank you, Svitlana. You took us on a journey to explore AI with your deep, deep expertise in the field, but also to imagine the AI of the future and our role in it. I'm so gratified that two of the most brilliant scientists I know personally, rather than focusing on the code or the programs, et cetera, have kept their focus extremely human and humane. Hearing words such as trust, participation, and thoughtfulness from people who know what they're talking about on the technology side is very comforting to all of us and to our audience: knowing that this extraordinary technology, this introduction, almost, of a new intelligent species into our midst, is in the hands of people who think about it.
So make sure that your colleagues and the other leading scientists continue to focus on us, the human species, and on how we can augment not just the quality of our lives but even our happiness and our way of living. Thank you so much to both of you. You're going to have a repeat invitation to this podcast; I can see that already.
Thank you for joining me today on MINDWORKS. We always appreciate your comments, and feedback, and suggestions for future topics and guests. Email us at mindworkspodcast@gmail.com. We love hearing from you. MINDWORKS is proudly produced by Aptima Incorporated. Our executive producer is Ms. Debra McNeeley and Ms. Chelsea Morrissey is our assistant producer. Our special thanks to Bespoke Podcast Editing for excellent sound engineering. And don't forget, we welcome your thoughts and ideas. Thanks again for listening.