MINDWORKS

BONUS: The Human Side of AI: How Two Scientists Found Their Way from Curiosity to Career

Daniel Serfaty Season 5

In this MINDWORKS Mini, Drs. Svitlana Volkova and Robert McCormack share with Daniel Serfaty how early fascinations became lifelong work in artificial intelligence. They describe how formative experiences, from math and optimization to philosophy and logic, sparked their interest and shaped their careers. This conversation highlights the power of cross-disciplinary thinking and the human perspective that continues to guide their work in building intelligent systems and advancing human–AI collaboration.

Daniel Serfaty: I'm going to ask you a question that I'm actually personally curious about. What sparked your interest, each one of you, in artificial intelligence?

Svitlana, you want to start?

And, Bob, this is a question I ask to both of you, so jump in.

Svitlana Volkova: Absolutely. So my fascination with AI started during my undergrad studies back in Ukraine, where two professors made a significant impact on my life, Dr. [Alexander] Trunov and Dr. [Yuriy] Kondratenko, who introduced me to computer science, to optimization, fuzzy logic, and early challenges in artificial intelligence. And I remember, back then, we didn't even call it that. There was no subject called artificial intelligence. We still called it optimization theory and things like that.

In addition to that, after I came to the US over 16 years ago, I started my master's at Kansas State University. And my first project, I remember, was in data mining and natural language processing, information extraction, where we looked into different disease names and disease propagation in open-source data. And that was when I really embraced it: okay, this is AI. This is really automation, and there are a lot of things that we can do with this technology.

Daniel Serfaty: It's really been an international and multi-state journey for you-

Svitlana Volkova: Absolutely.

Daniel Serfaty: ... as well as a multidisciplinary one.

Bob, what about you?

Robert (Bob) McCormack: Yeah, so my first exposure to AI goes back to when I was actually pretty young growing up in Texas in the '80s. My dad worked in the oil-and-gas industry, and he was typically looking at exploration. How do we find where the oil is instead of just going out to a field and drilling random holes? And so he was a geophysicist by training, so he understood a lot about how different layers of the earth interact and how waves propagate through the earth. And I remember going on trips with him out to West Texas and collecting data when I was a young kid. And they'd have these big shaker trucks that would induce seismic waves into the ground. And they'd have arrays of seismophones or seismographs all around, and they would measure how those waves propagate down into the earth and back up and, from that, try to figure out where's the oil under us.

A lot of his work was in the R&D sector of oil-and-gas exploration. And he started doing a lot of early work in using neural networks to take that really complex, noisy data from seismographs and try to understand what it means in terms of what's under our feet, where it's most likely to have oil. And, really, he was at the tip of the spear in a lot of early AI work in the oil-and-gas industry. That really sparked my interest early on. And then, going into undergrad, I really wanted to understand more. And I ended up doing an undergrad thesis on using neural networks, in that case, to control the behavior of an avatar in a game, which really allowed me to take a deeper dive into the area. A lot of those early experiences, and what we'd probably call today pretty rudimentary AI, really sparked my interest and gave me a desire to find out more and really get into the field.

Daniel Serfaty: It's fascinating because both of you have a Ph.D. But, Svitlana, you were motivated by intellectual curiosity about combining disciplines. And, Bob, you wanted to find oil in the ground, in the sense of helping your dad do that, so it came out of a real need that, eventually, prompted you to do graduate studies in applied math. That's fascinating.

So, today, you have formal classes, as you said, Svitlana. You can have a formal curriculum. You can graduate with a degree in AI. But, if you're thinking about a single class or a single discipline that pushed you more directly towards studying AI, or AI-related courses or fields of inquiry, what would that be?

Svitlana Volkova: I might sound old-fashioned, but it's still the fundamentals in math and optimization. I think this grounding, foundational knowledge that allows you to understand how it actually works, statistics, that's the key. However, in 2025, I would like to add that this is necessary, but not sufficient, because, right now, I think the real breakthroughs are going to come from, again, the cross-disciplinary contributions to AI technology development: neuroscience, cognitive science, human factors. Because the way that we interact with AI is fundamentally understudied. And we need to study that, so it's still a multidisciplinary framework that we have to apply in order to call ourselves AI experts.

Daniel Serfaty: Yes. That's very interesting. What about you, Bob? Can you point to the single course or class you took, either undergrad or graduate, that eventually pushed you towards being more and more curious and now leading a whole division focused on AI?

Robert (Bob) McCormack: I actually remember, specifically, in my sophomore year of undergrad, I was taking three courses at the same time. One was a mathematics course in logic. One was a computer science class, and the third was a philosophy class. And I really wanted to take the philosophy class just as something outside my major that I was interested in. But what was really interesting was to see how those three fields lined up. And it really got me thinking about the way we structure knowledge, the way we ask questions, and what it means to be intelligent, what it means to think. That really pushed my interest in understanding how do humans think? And how is that different from how computers think? Are they converging? Are they diverging? And, really, how do we approach these difficult problems from both perspectives?

Svitlana Volkova: I just want to piggyback on Bob's answer about philosophy. Right now, when we're trying to study the ethics of AI and the implications of AI technology on ethics and vice versa, philosophy is the key. And I didn't get a chance to really combine them. I don't think it was in the earlier curriculums that Bob and I were exposed to. But, right now, I think it's the key. Bob is absolutely right that this philosophical knowledge and thinking about the societal impact of AI technology is the key. It's fundamental and needed, maybe even more than programming.

I have a seven-year-old, a daughter. And, obviously, as a computer scientist, I wanted her to be a good programmer and a good coder and have this logic and the ability to think structurally. But, right now, look at what AI can do with programming. Right? Bob and I took programming classes and computer science. But, right now, AI can write code for you. And it's pretty good. Instead, I think philosophy is more important right now, so that, when we do something, we all think about what the consequences of our actions would be.

Daniel Serfaty: Fascinating stuff. I agree. I had a conversation this week with the dean of the nursing school of a major university here in the Boston area. And we talked, obviously, about AI, not about nursing, but I was very curious to see how nursing is changing as a result of AI. And she might actually be a guest on a future podcast here, because it's fascinating. And she said something so similar to what you just said. She said, "From a curriculum perspective, nurses are receiving all kinds of information coming from AI, complex data analysis, whether it's an MRI result that is being fused with some X-ray, and they need to recommend something." And she said, "I think I'm going to introduce two new classes for my nurses, as if they needed more classes, but two new classes. One of them," she said, "in ethics and philosophy, beyond what they study. They study a little bit of medical ethics, but it's beyond that now. And, two, critical thinking: the ability to understand how to combine recommendations that come from an artificial intelligence with your own expertise, and how to think critically about that."

And I thought it was fascinating, because it's almost like the pendulum is swinging back to the human sciences. And that is perhaps a trend for the future. We'll talk about trends for the future in a second, but let me ask you about today. Both of you are deep into AI all day long, in everything you do, in every person you supervise, in every customer you interact with. So what do you find most rewarding, as senior scientists, about your current work in AI? What gives you satisfaction, pleasure even? That's a deep question. It's personal, but what makes you happy about your work in AI these days?

Robert (Bob) McCormack: I think, from my perspective, it's really seeing how we can make people's lives easier in the jobs that they do. A lot of AI over the past few years has really turned to automating cognitive tasks: reading and understanding large documents, summarizing and helping to write reports, things like that. What we see is a shift from lower-level thinking about processes and "how do I approach this" to moving the thinking up to a higher level, to really focus on, "Why am I doing this task? What is the impact of this task?" And it allows us to get a lot further with a lot less time in our jobs.

Daniel Serfaty: You are improving the human condition.

Robert (Bob) McCormack: We're trying.

Daniel Serfaty: Okay.

Svitlana Volkova: Yeah, and I would totally agree. That's what really excites me and gives me dopamine: we built things that can change how people work and solve previously intractable problems. Right? And, as Bob mentioned, a lot of the innovation and change has happened in cognitive task simplification and cognitive augmentation, let's put it this way. Just to give a few examples, think about information operations analysts: giving them a proactive defense using generative AI models, so they can easily understand what's happening in the information environment at a previously unimagined scale, and develop the reputational armor to defend themselves, to make sure that what they experience in the information environment is less vulnerable and more resilient. That's fascinating. Humans wouldn't be able to do it without artificial intelligence.

Or, another example, think about how humans and AI systems work together. We have a DARPA effort called Exploratory Models of Human-AI Teams. Having the ability to model AI characteristics and human characteristics, and to simulate how these systems work, with the goal of optimizing the human-AI system and understanding which properties of the human and which properties of the AI affect teaming, is the key. And doing it at scale: instead of five people, doing it at large scale, and being able to manipulate these characteristics and see how those manipulations affect the end goal is really key. And AI can help us do that.

Daniel Serfaty: Good. Well, let's jump ahead. I mean, these are wonderful examples. And, by now, the audience knows my personal bias about the human side of the equation. And you two, as Ph.D.s in computer science and mathematics, you even reinforce that now: AI is necessary, but what makes it both necessary and sufficient will come, perhaps, from the human side of the equation.