MINDWORKS
Join Aptima CEO Daniel Serfaty as he speaks with scientists, technologists, engineers, other practitioners, and thought leaders to explore how AI, data science, and technology are changing how humans think, learn, and work in the Age of AI.
The GenAI Education Revolution with Andy Van Schaack, Yair Shapira, & Svitlana Volkova
Join MINDWORKS host Daniel Serfaty as he talks with Drs. Andy Van Schaack, Yair Shapira, and Svitlana Volkova about how Generative AI is fundamentally changing education and learning in today's digital world.
This podcast is a follow-up to (not a repeat of) a panel discussion led by Daniel on Generative AI and training at I/ITSEC 2023.
Daniel Serfaty: Artificial intelligence, generative AI, large language models, GPT: our world has dramatically changed over the past year and a half. As a result, generative artificial intelligence, or generative AI, has been an extraordinary disruptor in many domains of interest to MINDWORKS listeners, such as education and training, which we'll explore today.
My three guests in this episode are pretty extraordinary themselves. Each one, the recognized expert in their field and even within that field, each one having chosen AI as their focus. First, Dr. Andy Van Schaack is a professor at Vanderbilt University and he was a founder and chief scientist of several Silicon Valley-based companies where he earned a dozen patents for educational technologies. Dr. Yair Shapira is an esteemed serial entrepreneur and founder of two companies in the EdTech domain, Amplio Learning and NovoDia, who are leading providers of educational platforms in the K through 12 space. Dr. Svitlana Volkova is a chief computational scientist at Aptima and a leading thinker in the field of human-centered AI.
Today's podcast is a follow-up to a panel discussion on generative AI and training that we led at an event called I/ITSEC, a large conference at the end of 2023. A lot has happened in the past four months. This is a field that is moving at the speed of light. So for those of you who attended that panel, keep listening; I promise you're going to learn something new. For everyone else, sit down, listen, and be prepared to learn a lot of new ideas.
Andy, Svitlana, and Yair, we have here three experts who are not just studying AI, but living it on a daily basis. I wonder, especially in the past year or year and a half, whether you can describe an aha moment that you had while playing with, practicing, or developing things with generative AI. A moment when you said, "Wow, I didn't expect that," or "That's going to change everything." Andy, I'll start with you.
Andy Van Schaack: I'll give you two examples, Daniel. The first was late in November of '22, the day that ChatGPT first came out. I remember I'd heard a little bit about it in the news and I played with it, and instantly I thought, "Well, there goes the Turing test." You can get specific about it, but it was pretty obvious how impressive it was. I can remember it was a Wednesday because I had Thursday to play all day, and then in class on Friday, I didn't cancel class. Everyone had to show up, but we didn't talk about the lessons I'd planned for the day. We talked about AI the entire time. I just shared with them this amazing moment in my life and hopefully created a little bit of that same feeling within them as well. I would say really every day since then... I'm in the field of cognitive psychology, and maybe once every six months a paper comes out in some journal and I think, "Wow, that's really pretty clever. That's an interesting psychological phenomenon."
But I'd say every single day something new happens in the world of AI that makes me say a word that I'm not going to say on this podcast, but holy mackerel, it's just jaw-dropping. Even this morning there were two more, things that just fundamentally cause me to say, "This is just amazing." This alone. There doesn't need to be any more development for this to be the most amazing technology we've ever seen. Yet the hits keep coming. That's why I love AI so much: from the standpoint of dopamine coursing through my brain, every day is just an amazing rush.
Daniel Serfaty: You seem to have a lot of fun in your intellectual life having that kind of excitement, especially at this stage of your career, when you've seen a lot.
Andy Van Schaack: I tell that to people. I tell them, "This is the most exciting time of my personal and professional life." There's just so much opportunity now, and I think such an amazing opportunity to contribute. Think about whatever area of expertise you have (remember I talked about that Venn diagram), match your area of expertise with the use of AI, and in that intersection you can do some pretty remarkable things.
Daniel Serfaty: Svitlana, did you have an aha moment at some point?
Svitlana Volkova: Absolutely. I think I'm still waiting for the aha moment of my personal life, but in my professional career, obviously, yes. I agree that it was back in November 2022 when ChatGPT and other GPTs became the norm, I guess, in our lives. What was impressive to me was not the actual model, because we had been working on generative transformers even before ChatGPT became such a big deal. And it was a big deal for a reason: it changed our lives tremendously.
The aha moment for me was the brilliant social experiment that OpenAI did, opening the system to the whole world and collecting human preferences and human interactions with the model. That was my aha moment: the human-centric approach to AI development, right? Basically, they collected the perceptions and preferences of many cultures, of people all over the world, to improve the model so that it can appeal to our reactions, our perceptions, our cultural backgrounds, and the way that we interact with AI technologies.
So that was my personal aha moment. Another one I would like to mention concerns these large-scale pre-trained models, called either foundation models or frontier models. Engineering is crucial to scientific innovation. The way that you train these models, the way that you pre-process and post-process, all of that is key. How you work with your data in the beginning and how you analyze the outcomes of your pre-trained models is extremely important. That was the second aha moment for me: that engineering should not just be in support of scientific innovation, but maybe even come before it.
Daniel Serfaty: That's great. Thank you. Yair, I guess that you're such a practical PhD type that you went from aha moment to building a business in three seconds. What caused you to redirect your attention and your considerable entrepreneurial skills in that direction? It must have been a sense that something's about to change and you wanted to be part of it.
Yair Shapira: Exactly that, Daniel. I think we all had anecdotal aha moments. Probably, like Andy said, every time we say something to GPT, we are just amazed. I remember how shocked I was when I saw myself saying please to GPT and trying to bribe it, offering $100 if it gave me the right answer, or the answer that I wanted to be right. Every day we are having these small aha anecdotes. But for us entrepreneurs, like you mentioned, Daniel, the aha moment is when we match a pain with a solution, and this time around it was a bit different because the solution came before the pain. Before we understood what the pain was, it was clear that this was going to be an earthquake. It was clear that this was going to impact the world of content, or all worlds of content, but what exactly is the pain that this can solve? I don't like incremental pains. I like big, profound sufferings.
Daniel Serfaty: That would be a different podcast.
Yair Shapira: Yeah. Having a bit of time, I went to ask what the systemic pain is. What is the systemic pain for schools, for school systems, for school districts? What is their biggest content pain? I was quite amazed to learn that the biggest pain is that they're doing everything they can to bring high-quality content into their schools. They're spending tens of millions of dollars. They're selecting, creating, curating, polishing, and distributing everything they can, adding more and more systems, more and more content, more and more resources.
But eventually, what happens in the classrooms is totally different. Teachers just do not use the district content that is high-quality, evidence-based [inaudible 00:08:44] that you want your kids to use. What teachers are doing in the classroom is going to YouTube or Kahoot! or Teachers Pay Teachers or other sources and sourcing their own content. It's a bit strange, because they're spending time that they're not getting paid for. So we had to ask them, "Why do you do that? You get such excellent resources from the school districts." This was the biggest pain for school districts.
The school district is a quality assurance entity. It's a mini regulator, and it cannot really exert that regulation this way. We went to teachers and asked them, "Why do you do that?" By far, they said that the number one and number two causes are that the content they receive from the district is not engaging and that it does not fit the variety of students, the diversity of students, that they have in the classroom, and that they're embarrassed to even show it to the kids.
The younger the teachers are, the more likely they are to source content not from the district. This is a systemic pain. The question is whether generative AI can somehow harmonize the need of the district to assure quality and the need of the teacher to make sure that this is highly engaging, highly personalized, not embarrassing content.
Andy Van Schaack: There was something that you said just a moment ago that helped me realize another aha moment, and I have the same habit as you do. I'm very polite to ChatGPT. It's funny: when I give demos, people afterwards ask me, "Why do you say please and thank you?" I'll even say, "Hey, you were really helpful." I say, "Well, one, it's the way I was raised." So talk to my mother about that. The other thing is, and I'm sure that both you and Svitlana know this: when you prompt using things like "This is really important to me" or "Think step by step," we see some pretty phenomenal differences in the quality of responses.
I think one of my aha moments was the realization that maybe the people who are best trained to study and improve large language models are not necessarily computer scientists, but cognitive psychologists, or psychologists in general, because a lot of the same phenomena that we see in human beings, like the recency effect and the primacy effect, we see in large language models, and perhaps not surprisingly. But I think really understanding cognitive psychology and how to work with human beings to help them learn informs the way I think about how I communicate with large language models and collaborate with them to get the best results.
So I guess I'm like you, I'm polite by habit, but I think the ways that we communicate with others shape the way that we communicate with large language models and then influence the quality of the responses we get.
Daniel Serfaty: You guys anticipated a point I wanted to make, and I wanted to pursue that question with you, Svitlana, because you were the first who mentioned it today. We talk about cognitive psychology and being polite and interacting, and my question is whether this is not an opportunity for us human beings to get a deeper understanding of the human side that this AI is designed, or at least supposed, to support. In the field of education, how much do we need to know about, as Yair says, the teacher, or as Andy says, the learner, the person who learns from it, in order to design better systems? Svitlana, if you can expand on that, and, Andy, you wanted to add something there.
Svitlana Volkova: Absolutely. I think we don't know enough. We know a lot, but I think we don't know enough. With the pace of AI development that is happening right now, I think this human aspect, learning how humans learn, will actually transform how AI learns. That's my hope. I think this is going to happen. Humans do not learn by reading terabytes of data on the internet, regardless of whether it's multimodal: images, text, speech, and videos. That's not how we learn. We learn through observation, through trial and error, through making mistakes, and, most important, through communication with other human beings, through social interaction, and in many other ways.
We're still studying how our brain processes information. I absolutely agree with Andy that the multidisciplinarity of this problem is crucial, and this is going to change how humans and technologies interact. We call these technologies AI right now, but we'll call them something else, maybe causal AI models or something else, in the next couple of years.
But how humans interact with technologies like AI-powered automation and autonomous systems... I'm not saying that humans will be replaced by AI technologies, especially in the near future, but some parts of our workflows can be automated. Like Yair mentioned, content generation, for example. Generative AI has great power to simplify our lives in content generation and to generate very good-quality, very engaging, and very effective content for learners and for teachers. Just to summarize: I see a critical potential in humans and the way that humans learn for AI technology development. I also don't see the technology replacing humans.
I think it will augment humans, and there will be, as I mentioned before, these systems of systems where we do have humans. Humans are critical. We have the human-AI partners, and we also have automation, where parts of our workflows will be automated.
Daniel Serfaty: Thank you. Yair and Andy, I'll give you an opportunity to add to this question of the human side: the learner, the teacher, the curriculum developer. How important is it to understand that side in order to optimize the entire system, before we move on to our next question?
Andy Van Schaack: I think this is the critical question. In class, if I want to get my students' attention, I always say, "Oh, yeah, I remembered a question on the midterm," and I get everyone's attention. If I could get your listeners' attention, it would be right now, and it's this: if you want to enhance human beings' ability to acquire, retain, and retrieve knowledge and skills and to transfer them to new environments, you need to think about cognitive psychology. We have 100 years of great empirical research on how to do that. We've been spending all of our time trying to reverse engineer the brain and really trying to understand strategies we can use in teaching and learning to optimize that process. So that's the first thing I think about when I think about developing AI-enhanced technologies. What I'm working on right now is a paper that should be done.
Well, it needs to be done before I present at a military conference in mid or late April, because I'm going to be presenting the results. It's essentially this idea: I took a look at 10 conventional instructional activities and assessments and asked, "What are the instructional objectives for each one? Why do we do these things in K-12 teaching and in military training, for instance?"
Well, if we understand the instructional objectives, we can say, "How can we create an AI-enhanced activity that achieves the exact same instructional objective, but also engages the learner in the information processing, the thinking, that leads to better acquisition, retention, and transfer?" So what I'm looking at is designing every one of those AI-enhanced activities for specific cognitive processes. These are things that are well understood, like retrieval practice, spaced practice, elaboration, those kinds of things.
In this paper, because I think people need tangible examples, I'm going to have 20 pages in the appendix of specific instructional activities and assessments that are AI-enhanced, that achieve the same instructional objectives we want in our classrooms today, but that cause students to do more of the thinking we want. It's really my response to the complaint many educators have: "If we give our students activities to do, if we give them assignments, they're just going to use ChatGPT to provide the answers."
I said, "Yes, absolutely. That's what I would do if I were a student." They're afraid that students' brains are going to turn to mush. I said, "Yeah, that's if you give them the assignment from 2023. But if you give them the AI-enhanced activity that was designed specifically to engage them in more effective information processing, then, yes, they're going to develop AI-enhanced skills, which are important for the workplace, plus they're going to use retrieval practice, spaced practice, elaboration, and do more thinking." Daniel, for me, it's psychology first, technology second.
Daniel Serfaty: I am taken by the enthusiasm I feel in the remarks of all three of you, as if this is something that is really changing your lives and our lives. You took it to the extreme when you said this is the most significant innovation and disruption in both your personal and professional life, and I wonder why that is. Why did the invention of the car or the airplane or the internet not provoke the same enthusiasm?
Is it because it's almost like we are discovering a new species, a new animal that we can communicate with? It's a little bit like us, but a little different too. Is that the reason for the excitement, that we're discovering a new intelligence living around us, or is it something else?
Andy Van Schaack: I feel like it's a superpower. I feel like I am way more creative, way more productive using this technology than I ever was before. If somebody took away my cell phone or my laptop, I would feel like, "Wow, I don't know how I'm going to get my work done." I feel the same way about AI now. If somebody took away my ChatGPT or my Claude or whatever I was using, Midjourney, I would feel like, "Wow, I'm back to the old days with just my brain." It's super exciting, because so much of my work revolves around creativity and innovation, and this just helps me do it so much better.
Daniel Serfaty: Cognitive prosthetics, isn't it?
Andy Van Schaack: Yeah, for sure.
Daniel Serfaty: Yair and Svitlana, do you want to chime in on this notion of why we feel that [inaudible 00:18:48]? I want our audience to sense from you, as experts, why we think this is, to use a cliché, a paradigm shift.
Yair Shapira: Yeah. I think it revolves around several elements. The first one is the very thin layer between technology and application for AI. It's like sitting on the engine driving a car rather than just having a car. I mean, if my mom uses GPT on a daily basis, she's actually using a neural network that's not her own; she's using an AI technology every day. I think that's the thinnest application layer we've ever seen.
Second, I think it's just shockingly effective. I'll give you an example. I hadn't seriously coded for decades, but when I started my new company, NovoDia, my homemade proof of concept was basically a Google sheet that was somehow loosely connected to ChatGPT and to DALL-E at the time. What it practically did was create a slideshow tailored to... I'm talking about probably March '23. It created a slideshow that was tailored to a specific student, and it could be on any instructional topic.
Daniel Serfaty: For our audience, what students are you talking about? Middle school, high school-
Yair Shapira: K through 12 students.
Daniel Serfaty: Okay.
Yair Shapira: I was looking at public schools, K through 12 students, and I was trying to tailor content to the needs of a specific student, whether according to their interests, skill levels, backgrounds, and so on. Think of an American Revolution slideshow for a student who is very interested in superheroes.
It took me one day to create. Just imagine: within one day I created a working application. I'll tell you what it was good enough for. It was good enough to secure a design partnership with a school district. It was good enough to get the first funding for the company. It took one day, and it was only 10 months ago; just look what has happened in the last 10 months.
Daniel Serfaty: If you didn't have that, to develop the same slideshow with multimedia about the American Revolution in that particular case, how long would it take a capable teacher to do that on their own, without the use of generative AI?
Yair Shapira: It would take a squad of people probably a week to reach the same result. You'd need someone who is an expert in the American Revolution, someone who is an expert in pedagogy, someone who understands diversity, inclusion, and equity, someone who is a designer, and maybe a QA person. All of those are represented by AI agents that work together. Very quickly, within a minute or two, they create dazzling content that, for the first time, is hyper-personalized not only to the skill level of the student, which is very common today, but also to the background, the interests, and the culture of the student. Think how important that is when today people understand that in a classroom of 30 kids, every student is different.
Daniel Serfaty: We'll go back to that, because one of the questions for us to explore for our audience is the notion of measures of success: how do we know it works, and what are we measuring in order to show that it works, and works better? We'll go back to that question in a second. But I would like Svitlana, as simply as you can, to explain to our audience how large language models, which are really the technology that underlies most of generative AI, work. What is it that they do in order to produce that effect and make Andy and Yair so happy?
Svitlana Volkova: A large language model is a statistical model that predicts the next word. The simplest way to explain it would be to remember what we had on our phones even 10 years ago; that was a language model. It wasn't a large language model, but when you typed your text, you saw a suggestion for the next word to type.
I implemented my first language model more than, I don't know, 15 years ago, back at Hopkins during my natural language processing course. It was a very simple language model, not a large language model. What makes it large is the terabytes of text data that the model learns from: the internet, books, and web pages. The model learns the associations between words, which words go together. Then, when we interact with the model, it generates text for us based on what it learned.
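The simple, pre-ChatGPT kind of language model Svitlana describes can be sketched in a few lines. Below is a minimal bigram next-word predictor in Python; the tiny corpus and function names are illustrative only, and this is nothing like the scale or architecture of a real large language model, just the core "predict the next word from counts" idea:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "terabytes of text" (illustrative only).
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    candidates = following[word]
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

A modern LLM replaces these raw counts with a neural network trained on vastly more data, and conditions on long contexts rather than a single previous word, but the objective is the same: given what came before, predict what comes next.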
Daniel Serfaty: What's a terabyte? For our audience, it sounds like a very large amount. I mean, what are we talking about? The Library of Congress? Are we talking about-
Svitlana Volkova: Oh, way more. We're talking about what the most recent models have been trained on: GPT-4, which is multimodal, images and text, and Claude 3, which is again a multimodal model. We're talking about the internet. I'm trying to remember the actual statistics. If we try to compare it to human learning, reading 24 hours a day, it would take a human something like 80,000 years to get through it. So it's impossible; there is no way we could read all of that on our own. But that's the comparison of how long it would take us to read this much data. That's what the terabytes mean.
Daniel Serfaty: It's just amazing to me that an algorithm designed to predict the next best word is able to generate things that make us happy, interested, surprised, et cetera. Andy, do you want to add something to that deep dive into the engine?
Andy Van Schaack: Yeah. Well, when I'm sitting around with my buddies and we're drinking beer and talking about AI, which tells you what nerds I hang out with, you hear people criticize large language models as stochastic parrots: all they're doing is predicting the next word. I think the conclusion we come to is that I'm not so certain the human brain is that much different. Why does that surprise us? We develop these neural networks to imitate and understand how the human brain works, and then we find these emergent phenomena, and that's the part that really creates the aha moments for me.
One of the courses I teach at Vanderbilt is on technology forecasting, how to predict and plan for the future. One of my favorite approaches, perhaps not surprisingly, is trend extrapolation. You look at logarithmic, exponential, and logistic functions, S-shaped curves, to try to predict the future. But we see these step functions in capabilities as we go from GPT-3 to 4, and I anticipate 5 at the end of the summer, because an emergent phenomenon appears. I think, Svitlana, your models probably did a pretty good job statistically with the amount of compute power that you had, the amount of training data that you had, and the types of algorithms that people were using. They would do that.
Then, when you throw enough of those three things at a model, suddenly it has a qualitative difference in its capability. I think we haven't even come close to reaching some ceiling effect with compute power. I'm sure you saw what NVIDIA just announced with their Blackwell chip a couple of days ago. Look at what Groq is doing with their inference chip. Look at training data and synthetic data, and at algorithms; I just read about one called RWKV. So I think we're going to see some more step functions, qualitative differences, not just in predicting the next word, but really in things that look like reasoning, increasingly more so than we've seen today.
Daniel Serfaty: Yes. Even philosophically, let alone practically, there is this equivalence between quantity and quality. As you said, maybe our quality of thinking, our creativity, comes from what you just described: incredible access to data, and what Svitlana just explained to us. But that's for the philosophers to resolve at some point.
Andy Van Schaack: I mean, look at how large animal brains are. You go from a mosquito to a mouse, to a dog, to a dolphin, to a human, and as brains get more and more complex, we have greater and greater cognitive functions. But AI doesn't have the limitation of cranial capacity. That's why I think what's going to be really remarkable is just how much more intelligent these systems can be than ourselves. Not if we're going to achieve AGI, but when we blow right through that into artificial superintelligence, what is that really going to look like?
Svitlana Volkova: Yes. But it's not only the size of the brain; it's how we use it, and how we use it all together as collective intelligence. We know a lot about how the brain works, but we also don't know a lot. With neural networks, we are trying to mimic model development similar to the brain's structure. There have been many suggestions that try to mimic specific parts of the brain and its functions which didn't really work and didn't scale; capsule networks, for example. So I agree with you that this combination is what worked: brilliant engineering, a brilliant model, and the right data applied to the right problem.
What I think is important is this notion of rapid adaptation. What language models allowed us to do is develop one big brain that can be rapidly adapted to many tasks. Yair, for example, mentioned content generation, knowledge synthesis, multimodal content creation, and things like that. One big brain that we, or rather the industry, used to spend millions to train. Right now the training is less costly, but this combination of the right compute and the right model, that's what worked.
Daniel Serfaty: Thank you. Now that we've taken a peek under the hood, let's zoom out a little bit for our audience and focus the rest of the discussion on the fields of education and training. I would like each one of you to think about one particular application that you think is already making a difference, and why you think it's making a difference, whether it supports the teacher, the student, the learner, the instructional system designer, or any other actor in the landscape of education and training. Yair, you want to-
Yair Shapira: Sure. One of the first things that I learned in the education space is that education systems do not like revolutions. They like impactful evolutions. I think that despite the excitement around AI, that still holds true. They like it to be an evolution rather than a revolution. I have a list, it's public on the internet, of I think 160 K through 12 AI companies that are well funded. So it's not because they don't have money. I'm subscribed to probably every social media teachers' group that deals with the topic, but what you eventually see is that there is probably only one application that is commonly used by teachers and students, and that's ChatGPT as is, not the advanced applications on top of it.
It's fun, it's smart, it saves time. I think it's a great application. The question is what happens, and what will happen, with the other 160 applications. Those of you who were active in the 1990s probably remember that when the World Wide Web became wide, you had Yellow Pages for websites.
Do you remember that? You could look for museums and you had 10 URLs for museums. It was an adapter between the world of telephony, where you had the Yellow Pages, and the openness of the internet. Most of the applications that we see are these adapters. They are GPT wrappers. They are teacher copilots. They are stepping stones, temporary solutions for future, more profound AI worlds. They gained some momentum in the market, but you don't see common adoption of these solutions. And there are hundreds of them.
I think the opportunity is by far larger. I think that all of these are temporary solutions. If you bring in such a powerful technology, you need to seize the opportunity to move the anchor point and make a really positive leap using the technology. It's not enough to incrementally make life easier; maybe that's enough, again, as a temporary solution. So instead of asking what teachers do with GPT and automating it, which is what most companies do, I suggest asking a different question. I suggest asking: what do education systems strongly need and want that they could not do so far and cannot do with GPT, but that, maybe, just maybe, with generative AI specifically tailored to the application, they will be able to do?
Daniel Serfaty: Give me one example of that. One example that you believe has already shown that it can make a difference or that you think has a potential to do so.
Yair Shapira: I'm a bit subjective, right?
Daniel Serfaty: I don't want you to be objective. I want you to give me an example from your practical experience.
Yair Shapira: This is exactly the question that we asked when we started NovoDia. We asked, "What is the number one problem with content in the K through 12 space?" The answer was that the content that the district brings and pays for doesn't reach the students, and especially, it doesn't reach the students who need it the most. Generative AI has the capability, we believe, as we show [inaudible 00:32:32], to harmonize between the reward that is required at the district level and the flexibility to personalize it to each and every student in the classroom. That's where generative AI can move the anchor point.
Daniel Serfaty: That's an interesting angle, using the power of generative AI to solve that particular need. Andy, Svitlana, I would like to ask each one of you to pick one particular project for our audience and say why you believe it's already working or has the potential to work in an impactful fashion.
Andy Van Schaack: Well, there's so many things that Yair has said that I wanted to respond to, but I want to answer your question directly. Then if we have a little bit of time later, I'll chip in on some of the comments that he made that I think were really interesting. So one application I'm working on, the code name is AQUAGEN, which stands for Automatic Question and Answer Generation, so aquagen.ai. The idea is this. If you look at military training, I do quite a bit of consulting with the United States Navy. If you look at how much effort and time, money, in particular, is spent on developing assessments at the end of the unit, at the end of the sequence, and how they have to fly in subject matter experts who have no domain knowledge about psychometrics development, test development, all the rest, they spend about 10% of any development program on developing assessments.
So, of course, you can use ChatGPT if you're very clever and use a mixture of experts approach and a series of filtering to create really high-quality assessments. The reason why I think that's important is, one, it saves a ton of time and money to create high-quality tests, but it's also recognizing that the most effective way of enhancing long-term retention and transfer is through self-testing. Quizzing yourself. But nobody, like me as a professor at a university, wants to hand out a bunch of multiple-choice questions for my exams.
Because they take so much time for me to put together. I hate the idea of my test getting out. That's just been my nightmare. That's the way the military looks at it. But the most common method of studying among students, and unfortunately, the least effective, is reading some book, highlighting it, and rereading it again. That's what most people do when they study. They read, highlight, and reread. But the problem is they don't engage in retrieval practice. They're never drawing something from memory, so they're never firing up their neural networks.
If we can create really high-quality assessments quickly and easily, then we can not just use them as tools for assessment, but actually as instructional strategies. Give your students quizzes, let them take the quiz. Let them find out what the boundaries of their knowledge are. Of course, you could have some built-in remediation. Daniel, you talked about adaptive instruction, personalized instruction. This is the way to do it. My partner and I are creating an app where you just drop something onto it, like it could be the NATOPS, like the owner's manual of an F-18. Boom. It produces 100 multiple-choice questions that are really super high quality, very traceable back to the source content.
Then people could use those to evaluate what their knowledge is. But then they use it as a tool to promote retrieval practice spaced over time with some elaboration and remediation. I think it doesn't sound super sexy, but the important innovations in medicine, for instance, are things like aspirin, penicillin, hand washing; the killer app on the computer and the internet is still email. So, Yair, I agree with you that maybe the killer app right now is ChatGPT. I think we want it to be something earth-shattering, but sometimes it's these simple things implemented well that allow us to really capitalize on amplifying human capability.
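Andy's generate-then-filter approach can be sketched in miniature. The following is a hedged illustration, not AQUAGEN's actual code: `draft_questions` is a stub standing in for a real LLM call, and the structural checks in `passes_filters` are illustrative assumptions about what a quality pass might verify.

```python
# Sketch: over-generate candidate multiple-choice items, then keep only
# those that pass independent quality checks and stay traceable to source.

from dataclasses import dataclass, field

@dataclass
class MCQ:
    stem: str                      # the question text
    options: list = field(default_factory=list)
    answer: int = 0                # index of the correct option
    source: str = ""               # passage the item traces back to

def draft_questions(passage: str, n: int) -> list:
    """Stand-in for an LLM call that drafts n items from a passage."""
    return [MCQ(stem=f"Draft item {i} about: {passage[:30]}...",
                options=["A", "B", "C", "D"], answer=0, source=passage)
            for i in range(n)]

def passes_filters(item: MCQ) -> bool:
    """Cheap structural checks; a real pipeline would add LLM-judge passes."""
    return (len(item.options) == 4
            and 0 <= item.answer < len(item.options)
            and len(set(item.options)) == len(item.options)  # distinct distractors
            and bool(item.source))                           # traceable to source

def build_quiz(passage: str, want: int, overdraft: int = 3) -> list:
    # Over-generate, filter, then truncate to the requested count.
    drafts = draft_questions(passage, want * overdraft)
    return [q for q in drafts if passes_filters(q)][:want]

quiz = build_quiz("Engine-out procedures from the NATOPS manual.", want=5)
print(len(quiz))
```

The over-generation ratio is the key design choice: drafting more items than needed lets the filter stage discard weak ones without leaving the quiz short.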
Daniel Serfaty: Both of your examples are predicated on deep knowledge of the domain. That's what I like: in your example, the assessment, we know how big a problem it is, but now you are using assessment in a dual way, to assess but also to instruct. I think that is the creativity that eventually will move us away from mere efficiency as a measure and toward effectiveness as a way to teach and to learn better. We'll talk about those measures of merit in a second. Svitlana, do you have an example you want to share with us?
Svitlana Volkova: Absolutely. Similarly, one application that Aptima built for the Navy recently was NAUTICAL, which leveraged large language models to revolutionize the instructional systems design process. It was a clever combination of prompt engineering strategies, a robust data model, and a user-friendly interface that enables seamless interaction, letting instructional designers rapidly generate high-quality training materials, including task analyses, learning objectives, and assessment items.
The impact of that is substantial. It's solving a really big pain point and, like Andy mentioned, it saves a lot of time and resources. But I would like to add another example. Earlier, Andy mentioned the productivity boost that he's personally getting; the same productivity boost that researchers and AI developers are getting is huge. There have already been quantitative studies, for example, recently published in Science, that calculate that productivity boost, how much time we save.
So another example I wanted to mention that we are working on is leveraging large language models, and not only language models but also multimodal pre-trained foundation models, to improve cross-disciplinary collaboration, which is critical for innovation, by enhancing the way that we learn, because researchers learn every day too. We get new information, we process it, we incorporate it, and we innovate. In order to learn, we leverage these technologies every day to boost our productivity.
Daniel Serfaty: It's interesting. Here you go. You talk about productivity as a measure. How do we measure whether we develop something that goes to market real fast, like in the case of Yair's company, something that stays at the research level, or something that is both research and implemented in the classroom on a daily basis? How do we measure the impact, whether it makes a difference, and a positive difference, in the learning space, namely education and training? Are we saving time? Namely, are we being more efficient with the resources we have? Are we being more effective? Are we reaching higher levels of learning? At the district level, or at the enterprise level, the Navy or some school district, are we increasing productivity in an economic sense? Talk to me a little bit about how you would measure that you are moving the needle with a clever introduction of generative AI into the educational and training system. I would like an answer from all of you. Andy.
Andy Van Schaack: Yeah. I think this is probably the second most important question, after the earlier one about what's the human component in using AI in education. I answered that by saying you need to start with psychology first and then use technology to implement those psychological principles with fidelity. The second one is really about how do we know if it works, right? You can't improve what you can't measure. One of the things I like to say, or ask, as a consultant is: what's the difference between change and progress? Well, progress requires measurement and comparison. So for anyone who's out there selling some instructional tool, some technology to help, the question I always ask is: so does it work? How would you know? I hate to say it, but we need to go back to first principles, which is if you're looking at a causal relationship, cause and effect, a randomized pretest-posttest control-group experiment is just the way to go.
We assign students to two different conditions. One uses the old-fashioned method, the other uses the new-fangled approach. We assess them at the beginning to make sure there's equivalence between the two groups. We implement our different approaches and then we measure certain outcomes. I think, Daniel, you're asking, so what are you measuring? Well, certainly effectiveness. Do they learn the material, right? Did they acquire it? Now if we're really smart, we'll do a delayed retention test to see if they actually retain the information.
If we're also clever, we'll look at transfer. We want to make sure that they don't just do well with the problems that they learned on the problem sets, but also ones that are adjacent. We're looking at rates of retention, we're looking at the transfer of learning, and then I think we can look at specifics of what we're trying to get afterward.
So that could be measures of creativity, measures of toughness. I did a project with the Navy on can we make sailors more tough? But I think the great news is we have all of those tools today. These are the standard methods that we use in experimental psychology to determine how effective these things are. I think, unfortunately, not a lot of people do this. When I go to military conferences and I walk around and take a look at all the booths, Daniel, that's how I first met you and spent so much time hanging out at Aptima with Svitlana and your team. You were one of the few companies that I saw that were actually doing these measurements to determine do our systems work, and here's how we know.
That's what I evangelize with senior leaders in the military. When I'm talking to the two stars and I say, "You're getting pitched all the time by companies that claim to have some fantastic technology, ask them how do they know if it works or not. They need to show you experimental results with participants or subjects in the study that are similar to the ones that they're working with."
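The design Andy describes, randomized groups, a pretest to check equivalence, a posttest (or gains) to compare outcomes, reduces to a small amount of analysis code. A minimal sketch with made-up illustration scores, not data from any study mentioned here:

```python
# Sketch of the analysis behind a randomized pretest-posttest
# control-group design: check baseline equivalence, then compare
# gain scores between conditions with a standardized effect size.

from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference between two independent groups."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

# Pre/post scores per learner (hypothetical data).
control_pre  = [52, 48, 55, 50, 49, 53]
control_post = [61, 58, 64, 60, 57, 62]
treat_pre    = [51, 49, 54, 50, 52, 48]
treat_post   = [72, 70, 75, 69, 74, 71]

# 1. Baseline equivalence: after randomization, pretest means should be close.
baseline_gap = abs(mean(control_pre) - mean(treat_pre))

# 2. Compare gain scores (post - pre) rather than raw posttests.
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]
treat_gain   = [post - pre for pre, post in zip(treat_pre, treat_post)]

print(round(baseline_gap, 2), round(cohens_d(treat_gain, control_gain), 2))
```

A delayed retention test or a transfer test would reuse exactly the same comparison, only with outcomes measured later or on adjacent problems.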
Daniel Serfaty: The field is moving so fast, with new innovations on a daily basis, that we don't have time to ask those questions. Those are difficult questions to ask, but I have the feeling that people build and then think about it later. We know what happened to some fields that did a lot of that. I'll go back to that in a second. But I would love to hear also from Yair and Svitlana about this notion, in a practical or even in a fundamental scientific way: what are the measures of merit?
Yair Shapira: Definitely looking for causality and running longitudinal studies. That's the theory. At some point in time, you should be able to show that. Of course, in the EdTech startup world, we need to move faster and there is no time to wait for longitudinal data. So I want to offer an additional, or an alternative, point of view. I want to share with you one sentence that I recall from one school administrator that, at least in my opinion, encompasses all the KPIs that schools really use.
She was asked about our product by her superintendent, and what she said was, and I'm quoting word for word, "They are having fun and they are learning." That's it. "They" is adoption. They're actually using it. It's not a white elephant. "They are having fun." That's engagement. They enjoy it. As long as they continue to enjoy it, they'll come back. "And they are learning," which means real progress, even if it's subjective from her point of view.
You may know that a typical district has 140 digital instructional content systems, and almost all of them are white elephants. So they may demonstrate amazing causality in longitudinal studies, but students and teachers do not use them. With that in mind, I would definitely go for "they are having fun and they are learning."
Daniel Serfaty: It screams for the question: how do you know that they are learning? But-
Andy Van Schaack: That's what I was going to say. Yair, listen, this is fantastic and I love having these conversations, but I agree with Daniel. I appreciate that administrator's enthusiastic remark, and as somebody who's sold into K-12, that's exactly what you want to hear. But there's a part of me that says: so how do we know that they're actually learning, and learning what? I think you can do a little bit of both. I think you can do a very quick turn. It takes a day, it takes a week, it takes a month. This doesn't have to be, "I take a year and a half, I get an NSF grant, and then I put together this complex project, and at the end of two or three years, I have some test results that come back and they seem very disconnected from the classroom."
I think you can do quick-turn, high-quality analytical studies to support the claim that principal just made. If I were working with your team, I would say, "I love that quote. I would lead with it." Then my next slide would be: and let's look at the data. I think you would agree with me, right? You want to have a little bit of both, from the heart and from the mind.
Daniel Serfaty: I don't want you to agree all the time today, some healthy disagreement is welcome. But, Svitlana, as a chief scientist focusing on that, you probably are also thinking about measures of merit for the introduction of AI. Give us a little bit of your insight on that, please.
Svitlana Volkova: Absolutely. First of all, if we talk about the measures of merit for AI in general, not only for education and training, I think it's fundamentally broken right now. We do have benchmarks, but I think we are not evaluating AI enough during deployment. I think that's what Yair and Andy just talked about: during deployment, when the end users are actually using the system, how do we measure whether the system is effective?
I think the fundamental way to measure here would be to focus on the end users. Let's talk about the end users. First are the learners. The measurements would be engagement, comprehension, skill mastery, which are very subjective. Even 10 years ago, I would have called these unmeasurable things, or things that are hard to measure. But over these last 10 years, we have learned how to measure these subjective outcomes.
For the teacher, for example, the one that comes to mind immediately would be time-saving. Teachers spend tons of time developing material, so time-saving would be one. For the curriculum designers, the metrics would be content quality, for example, alignment to standards, which we haven't even discussed, or the speed of iteration. This is also really important when you develop the curriculum. For institutions, or for the leaders and decision-makers, who care about everyone: student achievement, teacher satisfaction, resource efficiency. Again, all of these things are very difficult to measure. Then you bring AI into the picture, and it makes it even harder. But unless we approach this measurement from this end-user perspective, I think we won't have a complete picture.
Daniel Serfaty: I think so. I wonder actually if an interesting potential application would be to ask AI itself to suggest not only some measures of merit for its own help in the educational system but also how to structure a study. It could be a scientific, publishable study, but it could also be a lightweight study, without going into a full systematic experimental design, that would actually produce a proof of effect, basically, rather than anything else.
The reason I'm a little bit obsessed with this question personally is because I'm old enough to remember the previous two ages of AI. Some people say we're now in the third age, like the Jurassic age of AI, but the first two were primarily dominated by expert systems in the '80s and natural language processing, and after that, learning and neural networks. The claims were so exaggerated, and then there was no proof that it was actually impactful.
Eventually, we went into a winter, the AI winters, because I think the claims and the proof were at variance with each other. Did expert systems in medicine change medicine as they claimed they would when they came out? No. The question is, maybe if they had quantified how they would change it, we would have a way to answer the question, yes or no. With all the excitement that this new third wave or third generation of AI is generating, no pun intended here, I wonder whether or not we should get ahead of the potential criticism, and perhaps a winter of sorts, by showing that it changes society, it changes schools, it changes systems, it changes learners in some meaningful and deep way.
Andy Van Schaack: I think there's a misconception that what's required to do a really high-quality study to provide strong evidence of cause-and-effect relationships is that you have to do a longitudinal study and it has to take six months or a year and a half or three years to do.
One of the things I do in my research methods course is I show my students that within a 75-minute classroom period, we can come up with a research question, select the correct research design and the instruments, collect data, and then write it up as a structured abstract. The clock is ticking, and we go from a question at the beginning of the class to an actual one-page structured abstract at the end of the class. It's a high-quality study. If you have more than 75 minutes, you can probably do more than that, of course.
But I think that you can provide strong evidence of cause-and-effect relationships, whether you're measuring did they learn and did they have fun (Yair, I'm going to use your examples) or were they more creative. You can do those things very quickly, very rigorously. I just want to make sure that your listeners walk away knowing it doesn't have to be super laborious and super expensive to provide the evidence that would convince somebody like me that a particular educational intervention would likely be effective in my classroom or with the audience that I care about.
Daniel Serfaty: Thank you for saying that, Andy. I suspect that you're going to hear from some of our listeners in the audience asking for guidance about how to conduct those studies, because I think the difficulty is not so much in the methodology per se as in doing it in a naturalistic setting as opposed to a lab setting. I think that creates a methodological challenge that some teachers, or some superintendent in the case of a school district, or some others may not have the luxury to take on.
But that's a very good point. Thank you for saying that, so that people don't think it's a six-month research project they have to do to check the effectiveness of each intervention. One promise of generative AI that struck me among all the different promises, and I think Svitlana mentioned it earlier in passing as she was describing her insight, is the notion that because of the power of generative AI and its effectiveness, its efficiencies rather, we can address one of the greatest problems in the school district, which is individualization or personalization of instruction: taking care of students not just in the comfortable middle of the Gaussian curve, but also at the two extremes, very advanced students or students with some learning difficulties.
Tell me more about that, because I think that if generative AI could help address only this issue, it would be enough. Everything else would be the cherry on the cake. How does generative AI understand and adapt to the various needs of the learners? What are the mechanisms in place to ensure that it provides that service? Who wants to start with that?
Svitlana Volkova: Yes, I briefly mentioned it. One way this is happening right now is learning from human feedback to really adapt to the perceptions, actions, and learning of the human end user. This is really happening by learning from interactions, which fundamentally changes the way we interact with AI technologies. I think Yair mentioned it before, in the coding example: the line became blurry, the line between AI developers and AI users.
So one way is to learn from human feedback, from the interactions that are very natural right now. Right now, these are natural language interactions, but I do believe that the complete, transformational personalization is going to happen in the future. So imagine a classroom where you have many AI-driven agents observing the dynamics in the classroom, looking at you as a human being; computer vision systems can analyze what we feel, what our emotions are; and all of this in combination, I think, will be a comprehensive, I would call it compound, system.
There is a new term that is trying to take over the generative AI space, compound AI systems, where you have generative AI models, where you have AI-driven tools, and not only AI; these could be causal tools. Andy made a very good point about cause and effect. So these are systems of different agents: some of them will be generalist agents like the LLMs that exist right now, and some will be more personalized specialist agents that can observe and take the large amount of data that can be recorded in the classroom on an individual basis to understand how people are learning, how to improve their learning processes, and how to improve the outcomes of learning. Only in combination, where we can hear and see and read and observe and optimize by learning from this data, will we really be able to move the needle in the personalized learning space.
Daniel Serfaty: That's very interesting. So it's not just the current system, but also other sensory systems that can build a more comprehensive model, in a sense, of the learner being targeted in this particular case. Mrs. Smith, the third-grade teacher, knows basically by walking around the classroom and looking at her students that Maria is not going at the pace that Jennifer is going, and she makes adaptations on the spot. That would be a great promise to actually realize.
Svitlana Volkova: Exactly. GenAI is not there yet. Absolutely, not there yet.
Daniel Serfaty: Andy, you want to take that on?
Andy Van Schaack: Sure. I'll give you two examples. I think the first one is simple and it's somebody else's, but it reminds me of the quote that the future exists today, it's just not evenly distributed. I think if you want to know what the future of adaptive instruction looks like, take a look at Khan Academy and what they've done with Khanmigo. I think we're all familiar with Khan Academy, which is you give some kid who's learning algebra access to YouTube, and they can watch Sal Khan walk through some homework assignment he recorded himself.
But that company had early access to GPT-4, and then said, "What would adaptive learning look like?" They've done a fantastic job. So I would encourage your readers, or your listeners rather, to look up Khanmigo through Khan Academy. They do a fantastic job of not just saying, "Here's what a worked solution looks like," but, "Let me walk you through it."
If kids are tempted to say, "Just give me the answer," it says, "Nope. Instead, let me ask you a question, try to estimate something, and if you don't know, we'll help you out." I think that's one example. The other one is the app that I'm working on called AQUAGEN, and it's this idea that if you want to develop long-term retention, then you need to do periodic retrieval of information. Well, what Svitlana learns or what Yair learns or Daniel, what you learn, and what's easy for you is different than for me.
So if you said, I want to do retrieval practice spaced over time, adaptive to the learner, then there are a number of dependent measures of memory: probability of recall, latency of recall, savings in relearning, that you can be measuring covertly while the individual interacts with the system. Then, based upon that, you update your mathematical representations of their forgetting rate to schedule subsequent practice.
I think the good news about that is these algorithms are pretty well known. I have some patents on it going back to '99. But, Yair, what you said a little bit earlier that it took you one day to create a pretty impressive model, at least impressive enough to generate some capital and do the rest. What would've taken me and a team six months to develop is something I can monkey around with and create a really highly functioning adaptive learning system today using off-the-shelf tools.
I think this exists today in products like Khanmigo that you can take a look at on the internet. I know that it exists on my laptop because I'm working on them right now, and I think we're going to see increasingly more of these things come out.
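The scheduling logic Andy outlines, estimate a forgetting rate from responses and time the next retrieval accordingly, can be illustrated with a generic exponential-decay memory model. This is a toy sketch under simple assumptions, not the specific patented algorithms he refers to:

```python
# Toy adaptive spaced-retrieval model: each learner-item pair carries a
# forgetting curve; successful retrieval strengthens memory, and the next
# review is scheduled just before predicted recall drops below a target.

import math

class ItemModel:
    def __init__(self, stability_hours=12.0):
        self.stability = stability_hours  # larger = slower forgetting

    def p_recall(self, hours_since_review):
        """Predicted probability of recall after a delay (Ebbinghaus-style)."""
        return math.exp(-hours_since_review / self.stability)

    def update(self, recalled: bool):
        """Successful retrieval strengthens memory; failure weakens it."""
        self.stability *= 2.0 if recalled else 0.5

    def next_review_in(self, target=0.9):
        """Delay at which predicted recall falls to the target probability."""
        return -self.stability * math.log(target)

item = ItemModel()
first_gap = item.next_review_in()   # review before recall drops below 90%
item.update(recalled=True)          # learner succeeded, so...
second_gap = item.next_review_in()  # ...the next gap doubles
print(round(second_gap / first_gap, 1))
```

Latency of recall and savings in relearning would feed the same `update` step; the doubling factor here is an arbitrary placeholder for whatever update rule the real system fits to the learner's data.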
Daniel Serfaty: You are arguing, Andy, and I'm trying to paraphrase with apologies, that, "Yes, Svitlana's vision of a fully adaptive sensory system is a good thing to shoot for." But today, even without that information, the signals that would come from reading a face or reading student behavior, we can actually achieve some adaptation that will lead to personalization of instruction.
Andy Van Schaack: When I go talk to schools and I talk to teachers who are looking at these technologies for the very first time, I think about old-school machine learning; there have been lots of promises about AI, the technology of the future, and a technology which is always in the future. I think we can see immediate wins, like putting your men on base: let's hit a nice bunt and get to first base.
There are plenty of things you can do today. Just open up your computer, and here are two or three prompts that are going to make you more effective and more efficient. Do some simple things to get up to speed. For me, you double your capabilities at an exponential rate. So everyone needs to at least play with it for an hour to come up to speed, then put in another 10 hours and you'll double it, put in 100 hours and you'll double it, and then 1,000, and that's where I'm at right now.
I think if you are looking for something that's just going to be the Holodeck from Star Trek, adaptive instruction using virtual reality that taps into a neural link in your brain, that's going to be 10 years out. But there are things that people can do today with some pretty simple prompts that will save them a couple of hours. I think those are the real simple wins that people should be chasing to get up to speed with the technologies, to help them then go to the next level of creativity and productivity.
Daniel Serfaty: Yeah. Yair, you're all about today and now, as an entrepreneur going into the market. Do you already see, in some of the systems you have implemented in your company or even in the market, anything that actually starts to approach this notion of individualization of instruction and individualization of learning?
Yair Shapira: Yes. I think the terms individualization and differentiation and personalization have been commonly used, and have even been the holy grail of education for a long while, but I think in reality, it narrowed down until now to dealing with skill levels, what is often called adaptive learning. Even that was narrowed down only to pace, to how quickly you flip pages in your virtual book. The whole term of personalization narrowed down to moving fast-forward or not.
I think that learning theory shows that it's a multidimensional issue. Personalization is not only about pace; it's not even only about skill, right? It's about culture, it's about background, it's about relevance. It's about interest. It's about language, tone, styles, so many different dimensions that are all personal to the student, and every teacher knows that. Every teacher not only knows that, they spend time acting on it now. At first, we thought that they spend time on the edges of the bell curve, on the special students: the gifted students, the struggling students.
But I think that every teacher today knows that 70%, if not 100%, of the students are special in some sense. One may have a short attention span, one may be an immigrant, one may be all about TikTok and that's the only thing that they care about. So they are special. When we ask teachers, "Why aren't you using the content that is provided and proven and evidence-based and approved and so on? Why do you go and bring content from the web?" they said, "Well, we have 30 students. It's basically 30 different classes that we need to address, that we need to engage with. We need to meet them where they're at. For that, we need different content."
When you ask yourself why those educational systems are geared toward the center of the bell curve, it's because creating content is difficult. It's expensive. If it's expensive, then you want to cover as much as you can with as little effort as you can. This is exactly what generative AI relieves.
If you relieve the labor, the labor-intensiveness of creating content, then you can theoretically, and we claim even practically, personalize content to each and every student, or every group of students, as if you had a publisher per student, maintaining the rigor of the content, the learning science behind it, and also the engagement, the passion, the relevance, reaching every student.
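The "publisher per student" idea boils down to holding the approved content fixed while varying only the personal dimensions in the generation prompt. A minimal sketch; the template wording and profile fields are hypothetical illustrations, not NovoDia's actual product design:

```python
# Sketch: one district-approved canonical lesson, one generation prompt
# per student. Rigor is preserved by fixing the source text and objectives;
# only presentation dimensions (reading level, tone, interests) vary.

TEMPLATE = """Rewrite the lesson below for one student.
Keep every learning objective and all facts from the approved source intact.
Adapt only presentation: use a {reading_level} reading level, a {tone} tone,
and examples drawn from {interest}.

APPROVED SOURCE (do not alter facts):
{lesson}
"""

def build_prompt(lesson: str, profile: dict) -> str:
    # Same canonical lesson for everyone; only surface features vary.
    return TEMPLATE.format(lesson=lesson, **profile)

lesson = "Fractions: comparing 1/2 and 3/4 using common denominators."
students = [
    {"reading_level": "grade 3", "tone": "playful", "interest": "soccer"},
    {"reading_level": "grade 5", "tone": "direct",  "interest": "video games"},
]

prompts = [build_prompt(lesson, p) for p in students]
# Every variant still carries the untouched canonical content.
print(all(lesson in p for p in prompts), len(prompts))
```

Each prompt would then go to a generative model; because the approved source is embedded verbatim and the instructions forbid altering facts, the district-level content stays the anchor while the surface adapts per student.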
Daniel Serfaty: I think this is a very interesting insight. Thank you. The notion that it's not just about the bell curve, not just about the level of skill, but that it also has other societal dimensions, as you said, or other preferences, or other learning-medium preferences. So even the middle of the curve, the skill curve in a sense, can be seen as a sum of individual cases. Which brings me to my next question. We zoom out even more, looking at generative AI technologies as somewhat of a societal equalizer that basically solves, or at least starts addressing, the issue of privilege, of populations that need more attention, et cetera, whether it's on the demographic dimension or on the skill dimension or any other dimension we choose. That's a wonderful dream, if indeed it's the case that we can, using technology, solve or address a societal problem.
My question is, what policy changes are needed? If you have to look at the Navy, Andy, you mentioned the US Navy as an example; Svitlana, you work with very different institutions; Yair, the school system. If we step out a second and look at this new, fundamentally changing element entering the system at different levels, what policies do we need to change? What safeguards do we need to have regarding the use, misuse, and abuse of the system, and equal access to the system?
Is there a policy that needs to be... If I am the superintendent or the head of a district or the head of a schoolhouse in the Navy or the head of a major training enterprise in the police, do I need to change policies? Do I need to worry about this notion of access?
Svitlana Volkova: Yeah, I think absolutely. First of all, I think this year has been very exciting, not only from the technology development perspective but also from the AI policy development perspective. The executive order on safe, secure, and trustworthy [inaudible 01:04:21] by the president, as well as the AI Act just passed by the European Union, define at least some dimensions of the safeguards that we AI developers have been talking about for the last, I don't know, more than five years.
We shouldn't take these models for granted. They're absolutely not always right. They have tons of limitations. As AI developers, we knew these limitations. But as these systems are moving to more general population use, these limitations, these strengths and weaknesses, have to be properly communicated. I think, in general, for every AI technology development, critical thinking, as always and as with everything, and AI literacy, AI literacy for the teachers as well as for the learners, is a must.
But then, specifically, AI policy for training and education. I mean, first, the general principles of AI development and deployment have to be accepted, but we are talking about students, we're talking about kids. Here, proper evaluations have to be done when the students are using these AI technologies in the classrooms, and user studies have to be conducted in order to inform policies. I will stop here [inaudible 01:05:35] Andy and Yair have more to add here. But I'm very glad that we are moving, that we are finally enacting the long-awaited, necessary policies for AI technology development.
Daniel Serfaty: That's great. Thanks. Yes, Andy?
Andy Van Schaack: I think this is one of those cases where technology is evolving much, much more rapidly than leadership and audiences can accept and understand it. I think we have to go through the standard stages of denial, anger, bargaining, depression, and acceptance, that old trope about how to handle it. So I think it's just going to take people a while to first get blown away and be amazed by it, and then to be shocked and concerned. So when I go out and talk to K-12 schools, and even at Vanderbilt, to name names, the discussion among teachers and administrators is still about how do we stop kids from cheating?
I just wish we could get beyond that. It's this idea of... In my classes, students aren't just allowed to use AI, they must use it. We're very, very far from that in most classrooms, because it does require a teacher to change how they would typically teach. They've got to create new types of lesson plans and think about how they're going to assess people. Since I'm the one representing universities here, one aspect of culture that shapes policy is how we assess professors at universities.
I think there are going to be correlates of this, by the way, in corporate America and K-12 and other places. Right now, as you know, if you're a professor and you want to advance in your career, it's about publish or perish. It's about writing new papers. Well, imagine that you have a professor who last year wrote two or three papers, and then in the first three months of this year, they write 10 papers, 10 high-quality papers. Well, clearly, they had help from somebody. In the world of higher education, that's viewed as plagiarism.
That's not your thinking. That's the thinking of this robot, and why would I give you a promotion and not give the robot the promotion? So that's what the conversation is spinning around. They don't even want to touch the issue, it's radioactive, this idea that some professor might use this to do great science. What I find so remarkable, and I think there's an analogy from this example to other fields, is that the way we assess somebody's capabilities is by using certain measurements, and those measurements break when AI enters the equation. So if professors can't use AI, then certainly, they're not going to allow their students to use it.
So we're hamstringing scientific research. We're hamstringing students' capabilities of doing high-quality work, and it's because of culture. I think that what's going to happen is it's not going to be about whether AI can do it or not. It's about whether it's culturally acceptable or not. It's just going to take a while for human beings to wrap their heads around it. That's what's going to be so frustrating to people like Yair and Svitlana, and to Aptima developing AQUAGEN: we're just going to run into resistance, not because the technology doesn't work and can't produce these benefits, but because people just can't accept it given the structure of the organization and the rewards.
Daniel Serfaty: Disruption is always painful. Great disruption is always greatly painful. Yair, at the level of the schools of the K-12, are you seeing a need for a changing policy regarding the use of AI by any of the actors there?
Yair Shapira: Well, they are seeing a need for a change in policy, which we believe to be correct, but I think they're not approaching it in the right way. I think they're highly confused by the many options that exist for AI in education. They put them all in the same basket, which is not the right way to treat it.
It's like putting all electric vehicles in the same basket instead of saying this is an electric truck, and this is an electric bus, and this is an electric motorcycle. If you take an AI [inaudible 01:09:22], it's very different from an AI compliance system, which is very different from AI content, which is very different from an AI grading tool. Actually, an AI grading tool belongs to the category of grading tools, not to the category of AI.
So the application is the key that should direct the policy, the risks, and the guidelines that define the regulation, rather than the fact that it uses AI. In our case, in NovoDia's case, because we do want to make a wide impact, we want to somehow work around the resistance. The AI components that we're using are in the back end of our solution; eventually, the solution is a content library that is personalizable.
You can personalize content just like any other content library, and there are layers of human beings between the AI and the student. The AI is a technology that assists the creators in creating excellent content that can be personalized. That's the way we address this regulation requirement.
Daniel Serfaty: Thank you very much, Dr. Andy Van Schaack, Dr. Svitlana Volkova, and Dr. Yair Shapira, for this enlightening session. I learned a lot. To my audience, thank you for joining me today. As always, the MINDWORKS team welcomes your comments and feedback as well as your suggestions for future topics and guests. You can email us at mindworkspodcast@gmail.com. We love hearing from you. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Ms. Chelsea Morrison. To learn more, please visit aptima.com/mindworks. Thank you.