MINDWORKS

AI and the Future of Work

Daniel Serfaty

How will artificial intelligence reshape the way we work? Which jobs will evolve, which will disappear, and how can we prepare for what comes next?
 
 In this episode of MINDWORKS, host Daniel Serfaty talks with Professor Joseph Fuller of Harvard Business School, co-director of the Managing the Future of Work project, about how AI is transforming jobs, skills, and organizations. Together, they explore what makes certain roles more vulnerable to automation, how human–AI collaboration can create new opportunities, and what leaders, policymakers, and workers can do to ensure AI amplifies human potential rather than replaces it.
 
 Listen for insights that cut through the hype and focus on what matters most: adapting to the future of work with purpose, equity, and resilience.

Daniel Serfaty: How will my job be affected by AI? Is my work immune to AI influence? Will I be replaced by AI? AI is remaking work all around us, and the questions it raises about jobs, equity, and the human purpose in the workplace demand serious attention. Let's take a look at it together and go beyond the hype. This season on MINDWORKS, we're exploring how artificial intelligence is shaping the future of work. In our previous episode, we examined how generative AI and related technologies such as agentic AI are shaping the technological terrain we now navigate. In today's episode of MINDWORKS, we go deeper. We examine how AI is impacting work across industries, what roles are most vulnerable, and how organizations and individuals will have to adapt to this revolution. I am Daniel Serfaty. Welcome to MINDWORKS.

I'm deeply honored today to be joined by Professor Joseph Fuller of the Harvard Business School, an authority on the future of work and an essential voice at the crossroads of labor, business strategy, and AI. Professor Fuller is a professor of management practice and co-director of Harvard's Managing the Future of Work Project. He co-leads the Harvard Project on Workforce and is a nonresident senior fellow at the American Enterprise Institute. His research covers employment polarization, skills gaps, degree inflation, and, most importantly for today, the effect of automation and AI on workforce outcomes. He's also the co-host of a well-known podcast, Managing the Future of Work, which I recommend strongly to MINDWORKS listeners.

Prior to academia, he co-founded the Monitor Group, now Monitor Deloitte, and led consulting work at the intersection of strategy and technology. Together today, we'll walk through Professor Fuller's intellectual journey, frame AI's opportunities and risks in the workplace, explore sector-specific impacts, and conclude with some practical advice for leaders, policymakers, knowledge workers, and for you, our MINDWORKS audience. Let's dive in. Professor Fuller, Joe for today, welcome to MINDWORKS.

Prof. Joseph Fuller:  Daniel, thank you for inviting me and I'm delighted to be with you and your audience.

Daniel Serfaty: Wonderful. So let's start a little bit with how your interest in the future of work began. Was there a particular moment or insight that turned your focus in that direction?

Prof. Joseph Fuller:  Well, like a lot of life, it was a bit of thought and chance. When I left Monitor and joined the Harvard Business School faculty, the school had inaugurated several years earlier, at the end of The Great Recession, a study of the competitiveness of the US economy, and I was invited to meet with the leadership of that project to see if I wanted to get involved. There was an anomaly in the data, which is that the respondents to a survey we did of all our alumni, the first in the history of our school, felt that the workforce in the United States had been a very important source of competitiveness for the country, but they also assessed that advantage as rapidly declining.

It was actually the most stark data that came out of that survey, and I asked who was studying that, and essentially, the answer was, "Well, no one here feels they know anything about that." Perhaps foolishly, I said, "Well, why don't I start looking at that?" almost treating it as, "This is a consulting project in an area I don't know, so I'll learn about it and maybe come up with some insight." It was tempered also by the fact that I knew that my consulting clients, several of whom continued to work with me after I left Monitor and came to Harvard Business School, were very concerned about this. So I had some confirming data that this wasn't mass hysteria or overstated, and I began to look into it, and here we are the better part of 13 years later.

Daniel Serfaty: We talked about the Managing the Future of Work Project at Harvard. Is this how the initiative started, or did it start a little later? And how did the mission evolve over time, especially in the last few years because of AI?

Prof. Joseph Fuller:  Yes, on both fronts, it did start a little later. Originally, it was a module in the broader project on US competitiveness, but it rapidly became more than half of the ongoing research within the project and our dean at that time, Dean Nitin Nohria, suggested that it should just become its own project. And very happily, my colleague, Professor Bill Kerr, who's a scholar of global talent flows and immigration and labor productivity, who was already a collaborator of mine in a teaching setting, agreed to become a co-head of the project. So we spun that project out of the competitiveness project.

Several years later, the president of Harvard at the time, President Larry Bacow, asked me to chair a university-wide faculty taskforce on workforce issues, and that led me, at the end of that project, along with Professor David Deming of the Harvard Kennedy School and now the dean of Harvard College, to found the second project that you mentioned in your introduction, the Project on Workforce, which is more narrowly focused on upward mobility, skills gaps, income polarization. The Managing the Future of Work Project, by contrast, adopts an attitude of, "What are the important questions that decision makers in industry, in organized labor, in the executive branches of government need to understand with data and in a way that they find approachable?" So we don't create a lot of what would be classified as scholarly research for a peer-reviewed journal; instead, we try to present playbooks and analyses that speak directly to the questions that those decision makers have on their minds.

Daniel Serfaty: For me, that's particularly interesting, because as we start focusing on the AI impact on the workforce and the future of work in general, most people that I've talked to, both in my work but also in previous podcasts, start with the technology side. They unpack that at different levels, what kind of LLMs and what kind of agents, as opposed to looking at it from the human work side and understanding that it's really transforming the workplace and perhaps even society. I think it's very refreshing that you tackle it from that angle first and then look at AI almost like an independent variable that comes into the workforce. Can you elaborate on that a little bit?

Prof. Joseph Fuller:  Well, first of all, I'm delighted that most people are approaching it that way, which means I don't have much head-to-head competition in the way I'm thinking about it, but I think you described it nicely. Economists are very used to looking at technologies and productivity data, but fields like organizational behavior don't really figure in those types of analyses, certainly not the analyses usually done by labor economists. And labor economists start with a phenomenon and then explore it. They don't start necessarily with a problem and seek to interrogate it. And of course, much of the work done by the types of people you're talking about is absolutely brilliant, very, very difficult to do, very well documented. I read all of it, I benefit from the vast majority of it, but it's not actionable.

If I'm the editor of a peer-reviewed journal, it's actionable, but if I'm a secretary of labor and manpower in a state, if I'm a chief human resources officer in a company, if I'm a labor leader, if I'm an entrepreneur in the space or even a large company like the big workforce providers like Randstad, Adecco, Manpower, which all do their own research by the way and some of it quite excellent, I don't know what I'm supposed to do differently. Now, there are some clear lights there. Professor Brynjolfsson at Stanford, formerly at MIT, has done some absolutely seminal work. My co-head of the Project on Workforce professor David Deming at Harvard has done work that is right up on the other side of the line.

If you can stand the Greek letters and the equations and look at the findings, you'll see some important learnings for decision makers, but still not in context, and it often appears in publications that the vast, vast majority of decision makers are absolutely oblivious to. So we try to bridge that gap.

Daniel Serfaty: And I think that's a gap that needs more and more bridging. In your published work, I've read a few of your papers and certainly listened to a few of your lectures, and I'm impressed by how you bring what is basically an academic institution and its power into very practical advice. In one particular work, as we dig now into the framing of AI and work, you've pointed to the learning curve between junior and senior workers as a key factor in how AI will reshape job structures, that it affects a particular level in the pyramid, or however we want to visualize that hierarchy. Can you explain that argument a little more, and why it is central to the thesis of how jobs are being transformed by AI?

Prof. Joseph Fuller:  Well, we have several major workstreams going, and let me start with the first you referred to, which we called The Expertise Upheaval, that I wrote with Matt Sigelman, the founder of the Burning Glass Institute, who's been my co-author many times and is a tremendously insightful source on these issues, as is the institute, and with Mike Fenlon, formerly the CHRO at PwC, the professional services giant, but now happily an employee of Harvard University. We looked at how AI would affect entry-level jobs and tried to understand where AI would crimp the number of entry-level jobs because AI was significantly more productive at those tasks than an entry-level worker could be.

And those are what we call mastery jobs, where a lot of the tasks in the early years of employment are routine cognitive tasks, where the employee is being asked to apply rules or guidelines provided by the employer to make certain decisions. A really good example of that would be a credit analyst for a bank or for a commercial organization deciding whether or not to provide a buyer credit to buy my goods or services. The company will have developed rules by which you make those decisions: how big the account is, where they're located, how much they're buying, at what price, do they have a history with us or not, are we gaining market share through this or is this sustaining an existing account.

Well, generative AI loves rules-based decisions where there's lots of longitudinal data. Unless there's an error, it's very, very quickly going to have essentially no likelihood of hallucinating. So a first-year credit analyst economically will be dominated by the creation of a bot or an agentic AI to administer that process. But we call it mastery because over time that junior credit analyst begins to understand much more complicated transactions, to have the insight to change the rules by which decisions are made, to spot a hallucination in a more complex transaction. That may be three, four, five years into their career, because mastery is gained by traveling down an experience curve, and you can't have a five-year-old unless you had a one-year-old.

So if it takes three to five years to gain that mastery, then if you do not have a supply chain of talent growing into those roles, what's the organization to do? Now, a lot of the discussion about technology and the impact on entry-level jobs stops there, because the question has been a rhetorical one, "Isn't it true that this is going to destroy a lot of entry-level jobs?" But there's a doppelganger of this, which is jobs that will be made more accessible, where more people would qualify to be considered to be hired into entry-level jobs because of AI, because the AI is automating a task that has been hard for people to master, that may require more difficult and demanding credentials or more experience.

So an AI might be able to very quickly do the basic framing of a website, but the creative content, understanding the context and the strategy of the website, still falls to the person; left to itself, the AI creates prose and text and photographic and graphic collateral that's just all conventional, unoriginal, uninteresting, because it's gone and looked at competitive websites and just shot for the mean. So we call these jobs growth positions, and actually, growth positions outnumber mastery positions by about 40%. Mastery positions represent about 12% of jobs in the United States, growth positions about 17%.

Now, I want to be clear about a couple of things. While AI will allow more people to be plausibly considered for those growth positions, hiring is always a relative phenomenon. It's not, "Is this person qualified, yes or no?" It's, "Do I think Daniel is more qualified than Joe?" And so we may still see a bias toward people with those types of backgrounds and credentials. But in a low-growth labor force, and also with the growing importance of social skills to success in work, we do think that more of these growth positions will get broader consideration from a richer, more diverse pool of potential candidates, and then employers will benefit from that, as will the workers.

Daniel Serfaty: Thank you. That clarification just brings a plethora of questions. Does it mean that basically the acquisition of expertise to move from a growth position to a mastery position is going to be affected by the very phenomenon that we are mitigating? Basically, after three years of being in a growth position augmented by AI, the nature of the expertise needed to get to the mastery position has changed, because now your skill plus AI equals a different kind of skill or a different kind of competence at that level. So in a sense, the experts of tomorrow are going to be different than the experts of today.

Prof. Joseph Fuller:  Yes, I think that's very well said. A couple of observations. The first is that what is very, very difficult, arguably close to impossible, to get sorted out at this stage is when the curve starts shallowing, the curve of improvement for the AI, because it's been improving faster than we predicted. And of course, this is really a very big systems dynamics problem. Let's say you are my supervisor and my job is a mastery job, but I'm already a bit down the experience curve, so I'm able to keep my position but use AI to become more productive. Now, that may prevent the need to have another one-year-old.

It may allow us to reduce our current size of staff. It may allow you to expand my responsibilities. But at the same time, AI is affecting your job and the type of leverage you need from me. So you have this feedback loop created by AI. The second phenomenon is that AI is unique in the history of technologies insomuch as it improves itself. We've never seen anything like that. So think of the cost line, a breakeven line for when a mastery job is suddenly better off being occupied by a human being than by the AI. That is likely moving up the Y-axis. And similarly in other areas, innovation in AI will be directed through market signals to hard-to-do tasks, which might create some more growth positions, and we will just be juggling these two balls indefinitely.

Where it all settles, I wouldn't hazard to guess, but we're closer to the end of the beginning than we are to the end. 

Daniel Serfaty: I totally respond to this analogy of system dynamics because that's exactly what it is. You have double learning loops, basically. A lot of folks are critics of AI in a sense. They say, "Well, it's just like every other revolution. We have automated pilots. We have this." I believe the big difference is the one that you mentioned. This is a technology that learns, and therefore evolves and adapts, unlike an automated landing system in a cockpit. So in this particular case, there is a co-evolution or co-learning of the human and the AI together, especially as future systems of AI will have recall of previous interactions with that particular worker and therefore a better mental model of the worker. That's why I like to talk about human-AI teaming.

But talking about this notion that you expanded on, replacement versus augmentation of jobs basically, that's where most of the anxiety in society resides, I believe: people are asking whether their job can be fully replaced as opposed to transformed by AI in a smart way. On balance, which jobs, or task domains rather, and you started to outline them, those that are rule-based as opposed to knowledge-based perhaps, are more vulnerable to reduction and perhaps even elimination? Could you speculate on that?

Prof. Joseph Fuller:  Yeah, so I think we can actually go a little beyond speculation. I think we're beginning to know, certainly for what we'd call routine cognitive work. Cognitive speaks for itself. Routine just means there's a limited amount of variability to the work I do cognitively. So let's go back to a credit analyst, or let's say an accounts payable clerk. I can give the AI the rules that we want to preside over accounts payable, paying our suppliers when they present a bill: "Is there a claim against the supplier? Do we have contractual obligations to the supplier? Are we trying to preserve cash in a certain geographic area or division of the corporation?" And the AI knows what the new rule is and exactly how to apply it instantaneously.

I don't have to send emails to someone and make sure they read it carefully and didn't misunderstand it or disagree with it even because they have a favorite supplier and they want to get them paid fast, even though under the rule, it might be delayed and then say, "Oh, I hadn't had time to process email." I'm not trying to suggest any malfeasance here. So this generation of AI is very good at those types of routine tasks, which by the way are often the tasks that workers in these jobs describe as the most onerous for them, the least interesting, the ones they sigh when they have to think about it.

"I'm going to take the sales report and the inventory report and write the monthly update for the controller of the division," where you can do a bot to let you write an excellent draft of that. And the bot would probably take an hour to build, and if you're spending three hours a month on that, it will give you a very good draft by it. Having access to the huge databases and your last 20 reports uploaded, you will want to edit it to make the writing flow better and to look for hallucinations. But that discouraging task that you dread every four Thursday of the month goes from a half day to an hour. So it will do a lot of the more burdensome, boring work.

It's going to get very good at multimodal integration. Gemini with multimodal, your audience should understand, not only does that mean that it can create in multiple modes, it means now that video and audio are being tokenized and can be part of the context window. And so when you have tokenized video data and alphanumeric data, and you have context windows already in Gemini that accommodate 1 million tokens, it's a huge unlock into additional learning. Imagine the entire content of YouTube now being in the training set. So that's going to extend it much further.

So rules-based, people tend to think financial rules, but we're seeing things like customer analysis, where the AI can look at customer behaviors, all the data, create synthetic twins of customers or channels of non-customers, and start testing whether or not changes in offer elements will improve market share or gross margin realization or customer satisfaction. It's less good, both in companies and in functions, where the data is less clean or accessible. While it's excellent at legal work, it hasn't been very impactful yet in advanced manufacturing, often because the physical layout and the way the data is gathered in multiple advanced manufacturing facilities within a single company are often different.

So there isn't as much rich data. It isn't as integratable, and of course, if you're talking about manufacturing, you've got the ISO 9000 mindset of, "I don't want to use a technology that's going to cause a lot of scrap or might even cause the process to get disrupted in a way that damages the capital equipment or something like that." So more caution there. So lots of green shoots of spring, but it tends to be concentrated in a few functions and a few industries right now.

Daniel Serfaty: We'll explore those industries in a few minutes, but you said a few things that I want to dig into a little deeper. You mentioned this notion of the context window as opposed to the prompt. Most people, at least with LLMs and with the current way people understand AI, see themselves in a dialogue that has a little prompt window. You may enter a PDF or you may enter a sentence, a question or anything you want, and then you get the answer. I like very much your generalization into not just providing a stimulus, an input, but also providing the context of that input. Can you expand a little bit on that? Let's [inaudible 00:24:57] into that, at least for many of the people in our audience.

Prof. Joseph Fuller:  So all the models have had what's called the context window, which is, "How many tokens," a token being a bit of an image or a bit of a word, "does it remember in the session?" When AI began to move to the now famous transformer approach, where the prompt was not causing the AI to look for the next morphological word based on the previous word, but on the entirety of the context it could remember, the improvement was jaw-droppingly great. Currently, Gemini, the Google AI, has a million-token context window. That's a lot of content, and so it's getting ever more aware of what you are consistently interested in. And context windows will keep growing as compute becomes available, and so we will get better and better, faster and faster resolution.

Now, you mentioned prompts, and for several years, people were focusing on the task of how to write good prompts, and there was an emerging profession called prompt engineer. Those people really understood the models, how they worked; they usually had a machine learning background, a neural networking background, so they understood those principles. Now, I think that job is giving way to something that, maybe confusingly, I'm going to call a context engineer. So we're no longer talking about the context window, we're talking about the task. If you're using transformers and have a very rich context window, it's that much more important for the person interacting with the AI to really have domain knowledge.

Because the context window getting richer and the processing getting deeper mean the AI can both generate an important insight that might only be recognized by someone who really knows the field, or create a very compelling story that's just wrong at its base, but that a generalist who doesn't have that rich understanding of the actual dynamics of the phenomena you're probing may be fooled into accepting. I've always thought hallucination was a misnomer. It doesn't make up stuff that isn't there. It confabulates; it makes up a perfectly logical story based on what it's been trained on, which just happens to be fundamentally flawed.

Daniel Serfaty: But in a sense, that last part is very comforting, because you talk about the upheaval of expertise, and some people are talking about the death of expertise, which is horrifying and we don't agree. It's comforting because the implication of your argument is that, as the worker doing cognitive work interacts with the AI, the expertise of that worker has a direct implication for the quality of the total work of the human and the AI working together, because that worker will know how to set that context better and be able to evaluate what comes out of the AI better. Therefore, a new kind of expertise is needed in a sense, a context expertise. Is that right?

Prof. Joseph Fuller:  Yeah, exactly. I think that's a very nice description, and the context would be everything from how this market has worked historically, how AI is playing into this market, how we in this enterprise do business, what we stand for. The AI may make a very rational decision on granting credit to a customer, saying their financial statements indicate that a rule is being violated, but the context engineer may know that that customer, that supplier rather, is working on a critical technology for our new product release, that we have a 50-year history with them, and that we have an archrival who's been trying to get their attention and get more of their engineering talent.

So the context engineer drawing on multiple ... It could just be an expression of corporate values: "There's a longstanding supplier that we value. They're not enjoying a healthy period. We're going to stand by them. That's what we stand for as a company." But I think also, in complex phenomena, it often requires an expert to actually see something that's genuinely original, that's jumping out. And I think that's another thing that we don't have the term for. Maybe your audience could suggest one that we could all vote on, and I could pick one. But we're soon going to have a new type of error or problem or challenge in AI: it suggests something that is genuinely insightful, but the insight is so subtle and advanced that only a true master could detect it.

And this is, of course, detectable in the application of AI to the game of Go, where the AI won the tournament three consecutive games, and in one game, my memory is it was move 232, but I may have that number wrong, it did something that every expert said was daft, that made no sense. And then 100 moves later, it all came together and it was a brilliant non-linearity. So, that's an encouraging sign and a warning sign that if you can integrate every piece of human knowledge on a topic, you're going to come up with all sorts of things that people haven't thought of before.

Daniel Serfaty: That's such a hopeful perspective, because I think that's very insightful, that notion of recognizing insight or recognizing a unique perspective as opposed to just attributing it to something, "Oh, it's weird," or "He didn't understand," because you are not in the middle of the curve. How do you recognize the three-sigma insight?

Prof. Joseph Fuller:  Especially in a rules-based organization?

Daniel Serfaty: Yes. Exactly. That already points to particular sets of perhaps even new skills or new competencies that one needs to know how to train in a particular industry or particular organization. I have one last question before we get a little deeper into particular markets, healthcare and others. What do you think are the biggest challenges? Again, we're envisioning this world where humans and AI collaborate and have evolved, co-evolved perhaps.

In terms of designing the workplace, what are some of the biggest challenges, not necessarily the physical workplace, but we talked a little bit about that, in terms of creating an environment where humans and AI complement each other as opposed to competing with or even trying to eliminate each other?

Prof. Joseph Fuller:  Well, I think there are several. I am finishing now a very extensive project with my friends and collaborators at the research branch of Accenture, the high-tech giant, appropriately named Accenture Research, where we're looking at how the application of AI inside entire organization structures will lead to those structures changing. Now, we can only begin to get a sense for that systems dynamics effect that we talked about earlier.

But even without the ability to forecast that accurately, several implications are pretty clear. In a number of industries, you're going to have a need for fewer entry-level workers and particularly fewer of what are called individual contributors. So, you're through the entry-level job, but now a member of technical staff, for example, in a software company, where you'll have a very significant improvement in productivity, which changes the ratio of those people to their supervisors.

That suggests that in the future, organizations are going to have a much, much greater focus on retention. Why? First, because we need people that have the judgment, who can either see the three-sigma insight or detect the well-formed confabulation, but also because if I have fewer entry-level workers, filling those roles with people who have that insight will be harder. And if I tolerate a high turnover rate in the middle level, let's say senior software engineers, or I compete with some large company that hasn't understood this until it's too late and is therefore reduced to just throwing money at people, not to the extent that Mark Zuckerberg or Elon Musk are throwing it at AI geniuses, but if I'm late to this party and now I must steal others' talent, I'm going to pay a big price.

Let me add another thought, which is that the rate of technological advancement is at this point kind of asymptotically approaching the time it takes to master a technology. By that I mean the half-life of a typical technology, of a version of a technology. So, I'm not talking about generative AI versus machine learning; of course, machine learning is integral to generative AI. But going from GPT-3.5 to 4, to 4o, to 5, the half-life of those quite different models that have to be handled differently is approaching how long it takes to learn them. So, we've never been anywhere close to that.

Historically, what we've done is evaluate candidates for jobs, especially white-collar jobs, based on their credentials, mostly their academic credentials, field of study, performance, selectivity of institution, maybe some work experience either as a student or in a first job. And once we've hired those people, we've let them learn how to be productive in our companies basically through on-the-job learning. In fact, corporate formal learning budgets have been shrinking for quite some time in the United States.

Well, neither of those two things works anymore. I have to do much more learning internally to get people to keep up with the rate of change. I can't let them rely solely on learning on the job, or I won't be getting the most out of the technology until we've all moved on to the next technology, and it's hopeless for the education establishment to keep up with this type of velocity. These institutions are not built for speed.

Daniel Serfaty: Not designed for that yet.

Prof. Joseph Fuller:  Not at all. I tease my colleagues here that, certainly at Harvard College, our undergraduate institution, an Oxford don from the 15th century would basically understand the structure of Harvard College and its governance. He would be speaking a very confusing English to us and be astonished by things like smartphones. But if you showed him an organization chart, he'd know what you're talking about.

So, we're going to have to have much greater work-based learning, apprenticeships, co-op programs with universities, so employers can help evaluate the young workers, a rent-to-own model, but also give them experience that's going to give them a background, allowing them to be sufficiently productive fast enough that every job doesn't turn into a mastery job.

Daniel Serfaty: That's fascinating. In a sense, one of the implications of what you're saying is that if we're looking at the nominal 40-hour-a-week job, because of that need to learn, that learn-and-do, learn-and-do loop, a much larger proportion of those 40 hours will have to be built in and designed for learning on the job, not necessarily expecting an immediate productivity out of it, but the future productivity will be a multiple of what it is. That's a pretty drastic implication, as you say, in the way we hire and how we hire folks who will be able to sustain and benefit from that kind of learning.

Prof. Joseph Fuller:  Yes, indeed. And we'll be looking for people with a number of attributes. One is intellectual curiosity and a demonstrated capacity to learn. And as I mentioned earlier, we're looking for people who will have higher order, what are called social skills, the ability to work with other humans. That goes well beyond EQ, by the way, to communication skills, negotiation skills, comfort with dealing with strangers.

By the way, women are increasingly dominating higher education in the developed world. 58% of all current college enrollees in the United States are female. And women generally outperform men on social skills by between 25% and 33%. So, high-social-skills people with a proven capacity to learn, that mix is going to be increasingly women. And when I say this to some, let's say, reunion classes here at Harvard Business School, I'll make an estimate of the number of women in the room, the number of men in the room, and say it's 40% women, 60% men. I will say, I'm about to say something that 60% of you will find unbelievable, and 40% of you will think is so startlingly obvious that I ought to be embarrassed as a Harvard professor for mentioning it to an audience.

Now, there's another factor at work here, which is every single one of these dilemmas is a market signal to innovators and to entrepreneurs: come up with an AI solution for this. So, AI coaching applications now are within single-digit points of the effectiveness of human coaching in multiple dimensions. Why? Because human beings tend to make errors of the same type in a recurring fashion. We all have our personality footprints, and they come to the fore, and our quick-twitch reactions are going to be similar transaction-to-transaction. So, it's easy for the AI to get a sense of that.

So, I think there will be AI bots that understand Daniel's learning modality versus Joe's. There'll be 8 or 10 archetypal learning modalities for people in a consulting firm with this background, with this job description, with this academic background, and it will be able to detect where you're likely not to grasp what it does or where you're likely to make an error, be on the lookout for it, capture data, maybe prompt your supervisor, maybe prompt you. And we'll see the same thing, I hope, certainly in grade 4 through grade 12, if not grade 2 through grade 12, in the K through 12 system, where not only can I help young Daniel and young Joseph overcome things that are hard for them, that don't come naturally.

But also, I can tell their instructor, "Daniel needs help on this. Joseph needs help on that." Or, "Your entire class, we taught something last week, but none of them actually understands it. You'd better review that, because you're building on a foundation of sand on that core concept."

Daniel Serfaty: I think that last point is very important, because I see that both in K through 12 education and in professional training: one of the big benefits of those AI agents or bots is the individualization of instruction and the individualization of feedback, just because a single teacher or single trainer cannot take care of 20 people all over the learning curve, and specialized AI bots could.

Hello, MINDWORKS fans, this is Daniel Serfaty here. Do you love our podcast, but are you short on time? Check out our MINDWORKS Minis. We've helped pick the best moments from our full episodes and pack them into bite-sized segments of under 15 minutes each. Minis are perfect for your busy schedule. Catch MINDWORKS Minis and full episodes on Apple, Spotify, Audible, or wherever you get your podcasts. Tune in and stay inspired even on the go.

So, to date, in what industries do you see the biggest impact of the introduction of AI? You talked about financial services, accounting, business processes a little earlier. But if we look at large industries like healthcare, manufacturing, defense, education, where do you see, so far, the biggest impact? And then, if you could, speculate also on the next changes.

Prof. Joseph Fuller:  The biggest impact, so far, has been concentrated in financial services, technology companies, particularly those with software platforms, professional services, and then certain functions like consumer marketing. We're just scratching the surface though on several different areas. I think you can see now a very distinct trend in manufacturing operations for the application of AI tools in a broader function, which I'm going to describe as decision intelligence where AI is supporting a human decision-maker in making complex decisions, often called a skill.

And you see companies like Aera Technology, where, full disclosure, I'm on the board of directors, and Palantir, that are really making a lot of progress there. And what they're doing, Daniel, which is very exciting and I think your listeners with a lot of experience in large company settings will immediately appreciate, is that AI doesn't connect very well to the enterprise resource management systems.

All those companies, the Oracles, the SAPs, the Workdays, the Salesforces, are working hard to develop agentic capabilities that draw on their data. But we've never tackled the problem, which in the old days was called the master data problem, which is the accurate, timely, state-of-the-art integrated data set. ERP and ERM systems were going to create those. They became very unwieldy, immensely expensive to upgrade, and time-consuming to upgrade.

What a solution like Aera can do is interact with those systems to draw the data that they do house, relate those data to each other in the service of making a decision. And that is really showing up in things like supply chain.

Daniel Serfaty: That's really a very hard problem. I can speak from my own experience, even at Aptima: that smooth integration, exploiting the data that we have maximally to do better business, is such a difficult problem, and usually we use a lot of human glue to connect those different things. AI cracking that problem will probably affect many industries.

Prof. Joseph Fuller:  Well, the human glue gets factored into the decision intelligence, because the AI supports and generates recommendations to the human part of the glue and learns from the human's reaction to the recommendation. So, it becomes a decision engine, but it's safe to say it's also long-term, in-depth, structured learning for the AI.

Another area which we can't quite build the data bridges for, but AI will probably ultimately build the bridges, is how do we build bridges between your medical tests, your genome, and massive banks of data about the medical histories and current health status of millions and tens of millions and hundreds of millions and billions of other human beings. I have both a grandson and a granddaughter, each one from one of my two married sons, who are both about a year and a half old.

I imagine, certainly by the age of seven or eight for them, that their pediatrician will be able to draw blood and get an AI-driven, forward-looking diagnostic forecast for them based on LLMs of genomics, and indicate to my granddaughter, Frances, that she's at very great risk for various types of melanoma and has to bathe herself in SPF 2,500 if she can find it. And that my grandson, Donovan, will be told that he has a propensity toward high cholesterol, and that his parents should start, if not turning him into a vegetarian, then cutting out red meats, eggs, whatever else, since, as we now know, the US government's famous food pyramid was so incredibly astute that it was basically wrong about everything.

So, outcomes like that, we already have terrific tools for AI transcribing the notes of a session, cataloging them, AI working to make the medical record systems, like Epic in the United States, much more efficient and effective. In a lot of industries right now, it's like a rugged part of the world with groups of lakes that were scoured out by glaciers. They're very deep, they're very large, and they're fully separate. We're going to be able to dig some canals.

Daniel Serfaty: That's a beautiful metaphor. I am wondering, actually, about two industries that we are familiar with: the one that you just referred to, healthcare, but also other industries such as defense and security. Some of those decisions are very intimate, like healthcare, my doctor, my x-ray, my blood tests, as opposed to decisions that have geopolitical implications, but in both of them, lives basically depend on the right decisions. Can you expand a little bit on the trust aspect?

I know that many people always go back to that question about trust in AI. But in those particular domains, is the slow adoption of AI a function of people saying, "If we don't trust it fully, we cannot actually execute on the recommendations or the decisions of a doctor"? I mean, I was reading in the paper this morning an article about a shortage of primary care physicians, PCPs, and the move by some HMOs, et cetera, to actually replace several functions of the PCP with an AI. And I wonder about the degree to which patients will trust that, even if it's just for triage purposes or for an initial meeting or things like that. Have you seen an evolution in that trust function?

Prof. Joseph Fuller:  Well, younger people trust technology more. I think what you'll end up with, particularly given the way EU regulation is structured, is that in a lot of jurisdictions, you'll see a final decision remaining with the physician. The EU puts the highest level of AI risk on human resources decisions, so the HR professional will have to validate the judgment of the AI.

I think it's also apparent that we're going to have AI that monitors other AI. And that monitoring AI will be given very, very low heat settings, what we call absolute rules, with very little variability accepted, certainly not without flagging it to some human overseer. And that will happen at the corporate level and the ministerial level and the whole-of-government level. We cannot rely on human oversight alone, per our conversation earlier about the as-yet-unnamed non-linear insight.

It's not reasonable to assume that a normal person, even a distinguished and experienced physician, will be able to match wits with these data sets. We do know already that in medical imagery, the AI is consistently better than the radiologist, particularly based on time of day and day of week, but we also know that it makes mistakes. Now, every time it makes a mistake, if the training regime is set up correctly, it learns from that mistake, but it's still capable of making pretty big ones.

So, it's going to be more of a co-bot type approach, and integrated. Imagine having everything from Gray's Anatomy to Wikipedia to the Encyclopedia Britannica to the entire contents of the Bodleian Library at Oxford as your assistant, one that, through a now essentially unlimited context window, understands you, your schedule, what you do for a living, how you've made judgments in the past, and is an immense productivity enhancer. Various theorists say that, ultimately, AI becomes a unique personal assistant for every human being that's got access to it. That's a very bracing and exciting vision. We have a long way to go.

Daniel Serfaty: How long for that notion of a personal individualized AI?

Prof. Joseph Fuller:  I think if you're a person of some means who is adept at interacting with this technology, which by definition is incredibly easy to interact with and getting easier every day (you don't have to learn SQL), I think we're getting elements of that now. I think that's the vision for Apple. I think that's the vision for Microsoft. I think that's the vision ... Well, actually, for all of them. There is an incredible arms race. I think we'll see real tangible evidence of it certainly by 2030, but probably between now and 2030.

Daniel Serfaty: Wow. That's really soon. I am intrigued by your notion of an AI supervising other AIs, where that supervisor AI of sorts oversees and has strict rules with, as you said, little heat. What did you say, low heat settings?

Prof. Joseph Fuller:  Yeah. Well, if you turn up the heat setting, it's allowed more creativity. Lower heat settings would say, "Basically, here's the rule. And if the rule is being violated, call Daniel's cell phone and call the emergency hotline and issue a stop order on that decision." And the other AI is taught to honor the actions of the first AI. I don't want to act as if that idea of AI supervising AI is mine. You can see papers on that from the 1980s, and it is certainly the strong position of Eric Schmidt, formerly of Google, that that's where we're going to end up, and I don't understand what else would have the speed and processing power to get ahead of where we're going.

Daniel Serfaty: Yes. And I think, if implemented correctly, it may be a more thoughtful alternative to having AI over-regulated by some regulators in Washington or in the government somewhere. I think it's an intriguing idea that could be pursued, and it could also be implemented within each particular organization according to the rules and the ethics and the values of that organization, too. It doesn't have to be a universal enforcer, basically.

Prof. Joseph Fuller:  And we already have pretty intrusive regulation coming out of Brussels, which is unfortunately going to do a lot to ensure Sino-Anglophone dominance, with all due respect to Mistral, which is an economically designed approach. So, it's clever. I think you'll see plenty of focused AI successes in Europe that are really doing what amount to small language models that are narrow and deep. And a huge percentage of tasks are going to be processed by small language models that are accessible by designated large language models.

But by being small language models, they can be trained very precisely and are much less likely to suffer from hallucination, certainly compared to the original LLMs, when I could write a prompt like "I have an interest" and the guess for the next word would be "in." But really, I'm interested in my mortgage, and my next word is going to be "rate." Well, if I have a lending SLM, it'll never guess "in"; it'll always elevate "rate" as the next guess. Now, as these things are all transformer-based, that risk goes down a little bit by itself, especially with the context windows getting big.

Daniel Serfaty: Well, I have two more questions and both of them have to do with advice or insight. The first one is that if as a Harvard professor or as a senior consultant you are advising a CEO today, a senior manager, chief executive, about how to think about AI and workforce strategy over the next three to five years, what would be briefly your top three priorities that that particular CEO should focus on?

Prof. Joseph Fuller:  Well, my colleague here at Harvard Business School, Amy Edmondson, a very close collaborator of mine in teaching and a close personal friend, has a concept called intelligent failures. Those are failures where you don't have a huge amount of resources or risk on the line, but you're pursuing learning, and you're open to what you learn even when it disconfirms your hypothesis. I would say seek intelligent failure and tolerate it, celebrate it even.

Companies are being too cautious. They're running early experiments, getting what they view to be disappointing results, and using that to justify reining in their efforts. I think that's contrary to learning and very risky competitively. The second is democratize the AI, get it in people's hands. Our research at the Project on Workforce shows that people are 50% more likely to be using generative AI outside of work than inside of work.

Now, they're using free models. This is my high schooler's science project AI, or my "I'd like to take a tour of northern Italy, help me build an itinerary" AI. But people are not getting familiarized with it fast enough. The final thing I'd say, since I could go to a list of 10, Daniel, but I'll limit it to your request, is stop focusing on generative AI as a way to improve your current processes. That is attaching a lightsaber to some of Marshal Ney's cavalry at Waterloo.

This is a fundamental technology. It's as important as electricity. You need to start picking key processes and creating a new process around the AI that is built to the specification of maximizing the impact of AI. Now, that will require you to maintain your existing process while you're engineering the new one, so it'll be more costly. But AI is so fundamental that if you just say, "I'm going to use it to make my current process more efficient," you will get more disappointing results, and you'll leave yourself vulnerable to more ambitious existing competitors that are willing to fail intelligently and to AI-native companies, which are beginning to exist.

And that combination could be very dangerous to a large company that's saying, "Well, I'm trying to get some efficiency and I can't go so fast and I don't want to pay money for all these middle managers to have licenses that they're just going to use when they go home to plan their vacations to Italy."

Daniel Serfaty: These are golden pieces of advice for our audience. They're getting it for free. But these are very insightful. I resonate with every one of those three pieces of advice about what to do. Maybe as a way to conclude, let's call that worries and hopes. What keeps you up at night when you think about AI and work in terms of worry, and what is your big hope for the future of work? In that order if possible. So, we'll end on a positive note.

Prof. Joseph Fuller:  My worry is ... well, it's threefold. One is that this will be used in a short-sighted way by a lot of companies as an efficiency device, not as a transformational growth engine, which it can be. The second is that companies and workers will suffer as a function of bad actors, who will mostly be traditionally bad actors, but could also be bad actors in their companies or their customers or their suppliers, or, I suppose, theoretically the regulators. And the third is that we will end up with an AI divide, like the digital divide we had, both in the workplace and beyond the workplace.

In terms of hope, I am much more bullish than bearish, although I think my hopes will be realized more in a decade-long type of increment, so my concerns may play out in the interim. This can be an amazing productivity enhancer. We desperately need productivity enhancement, because as our working-age population stagnates or shrinks, the only path to growth is having more productive workers to support children and seniors and people who need government support.

It will take a lot of the tedium out of work. It will unlock people's creativity in ways they didn't imagine. It will make barriers to things like entrepreneurship much lower. It will greatly help reduce the number of people who start their careers in areas which they're not going to enjoy or are not really well suited to, because it will solve the matching problem. The whole quality of people's work lives, their productivity, their happiness, their ability to contribute is going to go up tremendously, if only we can keep this from becoming the stuff of big-power rivalries and let it benefit human beings.

I know teenagers, sometime in the last 30 years, started using the word "awesome" all the time to describe things. This is genuinely awesome. This is genuinely awe-inspiring. So, I hope that sounds hopeful.

Daniel Serfaty: Thank you, Joe. Your comments are really awesome and inspiring here. Thank you so much. Thank you for joining me today on MINDWORKS. We always appreciate your comments and feedback and suggestions for future topics and guests. Email us at mindworkspodcasts@gmail.com. We love hearing from you.

MINDWORKS is proudly produced by Aptima Incorporated. Our executive producer is Ms. Debra McNeely, and Ms. Chelsea Morrissey is our assistant producer. Our special thanks to Bespoke podcast editing for excellent sound engineering. And don't forget, we welcome your thoughts and ideas. Thanks again for listening.