MINDWORKS
Join Aptima CEO, Daniel Serfaty, as he speaks with scientists, technologists, engineers, other practitioners, and thought leaders to explore how AI, data science, and technology are changing how humans think, learn, and work in the Age of AI.
Mini: When AI Stops Waiting for Your Prompt: Agentic Workflows
AI is moving from passive assistant to proactive teammate. In this MINDWORKS Mini, Daniel Serfaty and Drs. Svitlana Volkova and Robert McCormack explore the rise of compound AI and agentic workflows: systems of specialized AI agents that collaborate, reason, and take action on their own.
Discover how this shift could transform work, amplify human-AI teaming, and reshape the ethical landscape of emerging technology.
Daniel Serfaty: If we look specifically at the past few months, the past year, I'm going to ask you for two things. What technology that you've seen, that you've touched, that has emerged over the past few months really has the potential to revolutionize AI, or to revolutionize work, in the next few years? And the mirror image of that question, pick both or pick either one: what particular development didn't meet your expectations so far? A development that was claimed would leave us more advanced in the AI world than we are today. So both a plus and a minus to set us up for the future.
Robert (Bob) McCormack: Two related things have emerged over the last year. The first is compound AI. This is the idea of taking multiple AI models that are very good at specific tasks and combining them into a larger system that can solve higher-order problems. So maybe you have a language model that's really good at understanding and interacting with humans. Maybe you have a vision model that can take imagery and make sense of it. Maybe you have an audio model that can listen to audio streams and summarize them. And all of those working together can really give you a bigger picture and solve much harder problems.
And the second piece, which is related, is agentic workflows. So the use of agents. When I think of an LLM, I think of it just as a tool. ChatGPT is just going to sit there until I prompt it, until I ask it a question. It doesn't have agency. So when we think about agentic workflows, what we're doing is giving these models agency and allowing them to take action and affect the world. And combining those two, now we have ecosystems of agents that work together, are specialized at different tasks, and can interact with each other. And there may even be an executive agent that is really the director of everything, telling each agent what to do and when, taking all that information, combining it, and helping people find answers.
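To make the "executive agent plus specialist agents" pattern Bob describes a bit more concrete, here is a minimal, illustrative Python sketch. All of the class names, the routing logic, and the stubbed handle methods are hypothetical placeholders, not any real agent framework or API; in a real compound AI system each stub would be replaced by calls to actual language, vision, and audio models.

```python
# Illustrative sketch only: specialized "agents" coordinated by an executive
# agent, as described in the conversation. Every class and method here is a
# hypothetical stand-in, not a real library or model API.

from dataclasses import dataclass


@dataclass
class Task:
    kind: str      # e.g. "text", "image", "audio"
    payload: str   # the raw input the specialist should work on


class LanguageAgent:
    """Stand-in for a language model that understands and summarizes text."""
    def handle(self, task: Task) -> str:
        return f"text summary: {task.payload[:40]}..."


class VisionAgent:
    """Stand-in for a vision model that makes sense of imagery."""
    def handle(self, task: Task) -> str:
        return f"objects detected in image: {task.payload}"


class AudioAgent:
    """Stand-in for an audio model that listens to a stream and summarizes it."""
    def handle(self, task: Task) -> str:
        return f"audio summary: {task.payload}"


class ExecutiveAgent:
    """The 'director' agent: routes each task to the right specialist,
    then combines their outputs into one answer for the human."""
    def __init__(self) -> None:
        self.specialists = {
            "text": LanguageAgent(),
            "image": VisionAgent(),
            "audio": AudioAgent(),
        }

    def run(self, tasks: list[Task]) -> str:
        findings = [self.specialists[t.kind].handle(t) for t in tasks]
        # In a real system this fusion step might itself be another model call.
        return " | ".join(findings)


if __name__ == "__main__":
    executive = ExecutiveAgent()
    report = executive.run([
        Task("text", "Field report describing unusual network activity overnight."),
        Task("image", "satellite_frame_042.png"),
        Task("audio", "radio_traffic_0300Z.wav"),
    ])
    print(report)
```

The point of the sketch is the division of labor: no single model does everything, and the orchestration layer, rather than the human prompt, decides which specialist acts when.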
Daniel Serfaty: So what you're saying, and I'll let Svitlana add her own perspective to that, is that we moved from a model of having a single assistant, so to speak, that is very reactive and compliant, to having a team of assistants, each with their own expertise, a committee if you wish, working together not only to provide you with answers, but also to initiate things.
Robert (Bob) McCormack: Exactly.
Svitlana Volkova: Yep.
Daniel Serfaty: Okay.
Robert (Bob) McCormack: I think of it as each AI model is just like a tool in the toolbox and we combine them into the Swiss Army knife and then agents are the construction crew that really know how to use all those tools in concert to build the skyscraper, for example.
Daniel Serfaty: It's a Swiss Army knife with a mind of its own, which can be a dangerous thing, but [inaudible 00:15:34] sharpness of the blades. Yeah.
Svitlana Volkova: Absolutely agree about compound AI. It's never a single model. And agentic workflows are totally revolutionizing the enterprise right now and what is possible with intelligent automation. But what I wanted to add as a recent emergence, something really emerging right now, are these reasoning models, the o1 type of models beyond GPTs. These models allow us to reason and interact with LLMs and VLMs in a way that lets us trace logical pathways and try to understand what the AI is thinking. That's good. And if you ask me what else we could improve, what negative things are still happening: I still fundamentally think we haven't figured out human-AI workflows. The ability for AI models to be efficient and effective and useful for the end user hasn't been tackled yet. And let me give an example.
There are recent Stanford papers published in Nature that investigate how doctors interact with LLMs, specifically to perform clinical diagnosis. And they found that AI by itself is better than the doctor with AI, which is crazy, right? AI should augment the human. And the reason for AI outperforming the human-AI team is that doctors fundamentally don't know how to leverage AI and its different aspects. And there are other reasons. There was another study by our colleagues from Moffitt in radiology that also demonstrates that, regardless of the domain, clinical diagnosis or radiology, doctors have to do some onboarding with AI to make sure they leverage the technology in a way that is complementary, not detrimental, to the team.
Daniel Serfaty: That's interesting because we are back to some of your earlier remarks about the human dimension. It's obvious that in order for the human-AI collaboration to work, some human-AI team formation needs to happen. We know that phenomenon for any human team: you can put four stars on the basketball court, but that coalition isn't yet a team that is able to beat another team. They need to train as a team, they need to work as a team, they need to know each other's strengths and weaknesses as a team. It's an interesting point, and perhaps it's a good prompt towards the second part of our discussion, which is going to focus more on the future and where we are moving. Before we go there, I wanted to talk about what we have learned over the past year about the ethical or societal challenges that have arisen due to the recent development of AI. Where do you see that?
Because quite often this is the first question most people who are not AI experts like you will ask: is the use of AI ethical? Is it changing, in a dangerous way, the way we create art or the way our society works? Basically, those disruptive and negative aspects of AI. These are questions that have been asked for many decades actually, not just the past couple of years, but they have really accelerated over the past year. Many people ask about the weaponization of AI and less-than-ethical uses of AI. So can you put a little order to this for our audience: what do we know? Have we made any progress? Are committees of scientists or politicians trying to regulate the use of AI so that it doesn't cause more damage than it brings good?
Robert (Bob) McCormack: This may be the most important topic in my mind. AI is just a tool. I don't think it's inherently good or bad, but it can be used for good and it can be used for bad. That may be intentional or unintentional. I think what we've seen over the last few years is especially businesses see the cost savings of AI and are very eager to integrate AI solutions into their products, into their workflows, and maybe without fully understanding the consequences or the ethical or moral impact of AI. One example is in the job market. So people trying to find jobs, companies trying to find employees. AI systems are really good at understanding resumes, trying to assess people's skills and abilities and matching them to jobs. And we can train AI on data over the last 30, 40 years about how people work out in jobs, which people are most likely to stay in jobs over the length of their career, et cetera.
But when we look back at that data, there are also a lot of systemic biases in that data. So 30, 40 years ago, the workforce looked a lot different. Women were more likely to stay home and raise families. There was a lot less ethnic diversity in the American workforce. And if we train our systems on that historic data, those biases that existed 40 years ago are going to persist today. So we really need to be careful about understanding and making sure our AI is transparent and not perpetuating those sins of the past, so to speak.
Daniel Serfaty: Thank you. Svitlana, you want to add to that? I'm sure you thought a lot about this issue.
Svitlana Volkova: Let me start with the consequences that Bob mentioned, which are super important. The first one: think about information operations and what is possible in this space. With generative AI, AI is so good at generating really human-like content across modalities, across text, images, audio, deepfakes. AI can do it super easily, and it becomes more and more difficult for us to distinguish. So this is definitely a negative risk of AI technologies. And the second one, Bob mentioned agentic workflows: because agents can interact with your environment, with your computer, with the environment online, they're kind of off the leash. So we need better ways to make sure that we look into the reliability and security aspects of these agents. We need to make sure that these agents do not delete critical data on our computers, that they don't spy on us, and that we develop agents that are actually helping rather than hurting. So these are the two negative aspects I wanted to mention.
And in terms of positives, in the last two years, so many organizations and efforts have been established that are looking into the safety and security of generative AI and beyond: the AI Safety Institutes in the US and the UK, the European AI Act. And President Trump had multiple executive orders on speeding the adoption of AI within the federal government and removing barriers to AI development to ensure US dominance. And finally, there was an executive order on using AI technologies in K-to-12 schools, which is great. The impact of AI on education will be groundbreaking. Obviously, we'll have to use it wisely like any technology, but a lot of good things happened.
Daniel Serfaty: Yes, thank you for sharing both the positive and the potentially, I would say, dangerous if not negative aspects of implementing AI. I wonder whether, in our past, there was a particular technology that was introduced, the internet, or the steam engine, or whatever, that had so many potential societal impacts to be managed at the same time the technology was deployed, and what we can learn from that. I wouldn't let the politicians decide on that. I would love to have the scientific and technology communities contribute much more, because they will better understand both the potential and the limitations of those technologies for AI. We need to learn from the past massive development of any technology, whether it's the automobile, or the internet, or the telephone, and how it changed society, so that the change does good without causing too much damage.