MINDWORKS

Mini: Will GenAI Enhance or Inhibit Human Innovation?

Daniel Serfaty Season 4

MINDWORKS host Daniel Serfaty talks with guest Gene Kesselman of MIT about the impact of GenAI on its human users' ability to innovate. 

Listen to the full episode, "Leadership, Entrepreneurship, and Innovation," with Chris Wolfel and Gene Kesselman, only on the MINDWORKS Podcast.

Daniel Serfaty: Welcome back to MINDWORKS, Gene, for this conversation about generative AI and innovation. Today we're going to explore how generative AI, which has been omnipresent in almost everything we do over the past year and a half, actually influences the very process of innovation and entrepreneurship that you described in the past hour, especially since you have deep experience with that very process. You teach about it, you manage it at MIT. I would like your take on whether those GenAI platforms, for the entrepreneurs, for the inventors, enhance or inhibit innovation, according to what you have observed so far. If you have some examples to share with our audience, that would be wonderful.

Gene Kesselman: Thank you again for having me. It's a real pleasure to talk to you about this topic. I think, generally speaking, you can't put a blanket statement on this. I don't think I can say all of generative AI is helpful or all of it is hurtful, especially when it comes to something a little abstract like innovation.

But my experience, not just as a fan but as a deep user of generative AI, as much as you can be a customer of it, is this: if you think of innovation as a process where you have an idea, you do something with it, and you take it to some kind of impact or market or mission, then what generative AI does best right now falls on three axes. And this is my theory, my three axes: efficiency, creativity, and problem solving. You can think about the value proposition of any AI platform or AI product across those three axes.

And so, if you think of innovation generally as this thing where people, smart people or innovative people, hardworking people, whatever, just people, create things from their ideas, all three of those axes play very deeply in that process. You become more efficient, you can become more creative, or maybe creativity becomes easier, and it is extraordinarily good at problem solving, at least problem solving to the 80 or 90% solution.

So if you take just that layer, I think it is all benefit. It is just extraordinarily helpful. Of course, you then have to discount for all of the potential issues: the black box issues, the ethics of it, the hallucinations. And so, you do have to be very careful that you don't treat it as a panacea and just cut and paste its solutions.

But I think, generally speaking, my experience with it, and watching other people use it, is that if you think of it on those three axes, then you can really boil down the value proposition of generative AI for innovation and ask how it helps on each of those axes.

Daniel Serfaty: Yeah, thank you. I love your categorization of those three axes because I certainly share it. I can see how it influences even the work with my colleagues: the efficiency, the creativity, the problem solving. In a second, I'm going to ask you to give an example, of one of them or of all three, in which you've seen those essential processes of innovation, and frankly of entrepreneurial innovation, affected positively. But for our audience, you mentioned three concerns. Let's start with the concerns and then go back to the examples.

Gene Kesselman: Sure.

Daniel Serfaty: You talked about the black box effect, the ethics effect, the hallucination. Can you expand a little bit on that?

Gene Kesselman: Very high level, very simply, these models are, for the most part, black boxes, even the open source models. You cannot necessarily recreate or reverse-engineer an answer, and after a certain amount of training you don't actually know what's going on inside the black box of algorithms and processing within the model.

So in most cases, you can't actually say exactly why it produced the answer it produced. You can give general ideas, general assumptions based upon the training that was done for the model and the training dataset, but you cannot, like I said, reverse-engineer exactly how it created that solution. So that's the black box problem.

The ethics problem arises because these models are now trained on trillions of tokens. Humans are flawed. We have lots of biases. Those biases are represented in our language and in our writing and in our content, and those biases then get passed along to the models that are trained on it. So you always have to be careful about the training datasets and the biases inherent in that training data.

The hallucination problem is that, again, these are not perfect models. They're not 100% accurate. In fact, they are not accurate per se. As most people now know, these models predict language and words. They're not actually creating accurate representations for the most part; they're just guessing what words come after other words. And so, sometimes what they write is false or made up, hallucinations as they're called. You just have to make sure that you always do your due diligence on whatever is created, to make sure it's not inventing statements and data and things like that. So those are the three primary concerns about generative AI. There are others, but those are the primary ones.
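To make the "guessing what words come after other words" point concrete, here is a minimal sketch of next-token prediction using the small open GPT-2 model via the Hugging Face transformers library; the model choice and the prompt are illustrative only, not what any particular chatbot runs.

```python
# Minimal sketch of next-token prediction (assumes the `torch` and
# `transformers` packages are installed; GPT-2 is an illustrative choice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every vocabulary token

# Probabilities for the *next* token only: the model is literally
# ranking which word is most likely to come after the words so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

Nothing in that loop checks facts; a fluent but wrong continuation is exactly the hallucination described above.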

Daniel Serfaty: Thank you for that clarification. As familiar as our audience may be with these tools, probably using them day-to-day, whether to write a speech or to solve a problem at work, I'm glad you clarified those things, because they are rarely articulated in the open literature as clearly as you just did.

So let's go back to some examples that you could share with us. In your role at MIT, you are in charge of the venture studio, but you are also watching companies being created at the intersection of government work and commercial work. Can you think of an example, without revealing things that should remain private, where you've observed that efficiency or creativity or even problem solving was enhanced because the inventor or the entrepreneur or the team was able to interact with a generative AI application such as GPT or others?

Gene Kesselman: I have a couple of ideas, and the focus here is obviously not going to be on AI as the product, because it's impossible to work with companies now and not run into many, many that are integrating AI into a solution. So we're not talking about that. We're talking about how a company, a startup, is using AI as a tool.

I think one of the starkest, best examples I can give you is something probably close to your heart and your experience: the process of applying for government grants and contracts.

So, historically, as you know, the way a small business or startup would begin to access government contracts is through the SBIR or STTR process, the Small Business Innovation Research and Small Business Technology Transfer programs. It was always very onerous for groups that did not have a capture team, a bunch of people really experienced with writing government grants, who use the right words, the right ideas and concepts, and understand the customer. You had to have people whose job was to do this for a living: to capture government contracts.

The government has taken steps to make all of that easier for small businesses and startups to access directly, through open-topic SBIRs and STTRs, but it's still pages and pages of reading and pages and pages of documents and writing. Well, as much as it's probably a bad sign for the SBIR consulting industry, where people help you win those contracts, it now takes an insignificant and highly efficient effort to use these large language models to basically write your proposals for you.

And I'm not just the president, I'm also a client, so to speak. I'm very much using this every day, where we are also looking at the things the government puts out. Even for a guy who's been in for 24 years, going through these documents can be such a laborious task. In five seconds, I can get a very, very good, concise summary of a 40-page PDF that the government puts out for an RFP. I can create a standing GPT or standing prompt that will basically write the grant response for me in about 30 seconds. It's an 80% solution. I have to go back in and edit it and add my own flavor and my own insights.
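For listeners who want to picture what a "standing prompt" like this might look like in practice, here is a minimal sketch using the OpenAI Python client and the pypdf library; the model name, prompt wording, and file handling are illustrative assumptions, not Kesselman's actual setup.

```python
# Minimal sketch of a reusable "standing prompt" that summarizes an RFP
# PDF and drafts a grant response. All names here (model, prompt, file)
# are illustrative assumptions. Assumes `openai` and `pypdf` are
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

STANDING_PROMPT = (
    "You are a grant-writing assistant for a small business. "
    "First summarize the attached RFP in one page, then draft a "
    "response that addresses every stated evaluation criterion."
)

def draft_response(pdf_path: str) -> str:
    # Pull the raw text out of the government's RFP PDF.
    rfp_text = "\n".join(
        page.extract_text() or "" for page in PdfReader(pdf_path).pages
    )
    completion = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": STANDING_PROMPT},
            {"role": "user", "content": rfp_text},
        ],
    )
    return completion.choices[0].message.content

# The output is the 80% solution; a human expert still edits it,
# adds insight, and owns the final submission.
print(draft_response("rfp_40_pages.pdf"))
```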

But what would take, as you know, teams of people days, if not weeks, I can now, no exaggeration, do by myself: respond to almost any small business or government grant in a matter of hours, if not a day or two.

Daniel Serfaty: This is a wonderful example of efficiency and of problem solving, actually, in your own terminology. That's probably going to put a whole industry, the proposal consultants, out of business, as you say, or at least force them to change. As we know, much of the introduction of AI into our world follows a paradigm of transformation rather than replacement.

But you said something very important here for our audience. You said, "It gets me to the 80% solution." Then Gene Kesselman or his team of innovators take it from 80% to 100%. The 80% solution gets you a viable proposal; the 100% solution gets you a winning proposal.

Gene Kesselman: That's right. That's right.

Daniel Serfaty: So that teaming between the human expert and the AI has been crafted in a way that gets you to the point where you can express your true creativity. Is that right?

Gene Kesselman: I will completely agree with that premise, but I will add a caveat: that is the situation today. Given the speed at which these models are adapting, the amount of data they are training on, and the trillions of dollars now being poured into this, it is only temporarily an 80% solution. It is only a matter of time. I used to think it would be five years. Now, the way these models are evolving and how fast they're moving, I think it's a matter of months, if not a year or two, before it won't be an 80% solution anymore. It will be a 95, 98% solution. Maybe it's never 100%, because you never want to just blindly cut and paste.

But the idea is that it's not just a model; it can now be an actual representation of your persona. Most of us have enough of a digital footprint out there, enough of a representation of ourselves, that we are very close in time to being able not only to get to the 80% solution and then fix the remaining 20% ourselves, but to get to the 80% solution and then have the model fix that 20% as ourselves.

So a direct agent of Daniel or of Gene does the last 20% of the work, drawing on your emails and all of your documents. Everything you've created builds this persona, this agent, that will do almost exactly what you would do. That's the idea, and we're not that far away. So I will say that.

Daniel Serfaty: I don't know whether to get very enthusiastic about that or a little bit scared, because what you're saying is that the next generation of those systems is going to have a very accurate model of each one of us and, therefore, be able to write that proposal, whether it's a proposal to the government or, frankly, to any other funding mechanism, including investors, in a way that is fully representative of each one of us, a kind of digital twin of sorts.

Gene Kesselman: Yes.

Daniel Serfaty: Well, that will allow us to spend more time at the beach, I guess, while the proposal is being written in our name. That's an interesting vision. I hadn't thought about it that way, Gene. Thank you.

So let me speak to the other side of your professional self, in the sense that you are not just nurturing entrepreneurs, incubating ideas, or helping launch companies at MIT; you're also a lecturer. You have students in a classroom who come to hear what you have to teach, whether in entrepreneurship or in how to transition dual-use technology from, say, the military to the civilian sector.

As a teacher this time, as a person who has in front of him a collection of learners, of students, very smart students in this case, tell me a little bit about their use, or your own use, of generative AI in the classroom to do their work. Is it affecting your teaching?

Gene Kesselman: Yes and no. We taught a design-build engineering class last semester with Special Operations Command, and in that case it did not have a big impact on our work, because it was a fundamental engineering curriculum that was very hands-on and experiential. The students got a problem set, they talked to the mission user, they got the context, and then they went off and spent the semester building a prototype.

The students absolutely used generative AI to help with their creativity, and it probably improved their efficiency in writing the engineering deliverables we had them write. But the fundamental reality is that the AI is not going to build the prototype for you at this point. And so, there's still a lot of work the students have to do as engineering students.

That being said, on the other side, to MIT's credit, MIT Sloan had an open call for an AI learning group at Sloan that I happened to be selected for and was a part of. It was for teaching faculty who were using AI in the curriculum, looking at how we could do it better. They gave us access to every model available; we were able to get the pro accounts, too. They gave us a bunch of new products that were coming out. Every week there was something new, and we talked about how to use these tools in our teaching.

So anyone who's teaching anything at any level has to be aware of the impact. I think it makes grading very difficult. I think it makes writing papers much easier. I think it makes creating problem sets much easier. There's just so much good and bad, and teaching is going to fundamentally change because of it, even more than it already has.

I can't give you a final answer to your question, unfortunately, because everybody's figuring it out right now. Everybody is. Some see it as a real detractor. Some see it as the direction the evolution of teaching should go. But it's changing. One way or another, it's absolutely changing. We just don't know yet how much; that's the answer today, basically.

Daniel Serfaty: I like that answer. I like "we don't know yet," because it gives me an excuse to invite you again a year or so from now for the next podcast, when you can tell me, "Okay. Remember what I told you a year ago? Here's what has actually happened in the classroom."

Gene Kesselman: Sure.

Daniel Serfaty: That would be wonderful. A couple of weeks ago, we had a MINDWORKS podcast focusing on exactly that, the transformation of education, both at the professional level and in the schoolhouse, from K through 12 to the university. So we are tracking that very closely here at MINDWORKS, and I would certainly be delighted if you could share your insights in a few months or next year.

Gene Kesselman: I'd love to.

Daniel Serfaty: Let me ask you one last quick question. In your work, you see companies being launched. You see the student or the entrepreneur, or the student entrepreneur, go out and launch a company, and some of them are successful. What would be your advice for a young CEO or CTO who has just been successful, has gone through maybe a seed stage, and is now building and managing a company? How can they use generative AI to be a better manager and leader of that startup?

Gene Kesselman: It's hard to say, because I don't get a lot of access to the day-to-day use of AI within a startup. I will say, from a product standpoint, that the thing we generally advise about the AI market is drawn a little bit, in my experience, from the Web3 and crypto market of the last 5 to 10 years.

To be very clear, I do not think they're at all equivalent. I think the crypto market ended up being a huge bubble, not because the technology wasn't real, it's a very real technology, but because the market was very ripe for a whole lot of corruption and rug pulls and a lot of bad market players. I don't think AI is like that, but I do think companies were just sprinkling crypto over everything then, and now it's the time to sprinkle AI on your startup and just figure out some way to integrate an LLM or a product.

We certainly advise against that, because while you may be able to raise a little more money because you have AI in your title, long term that obviously corrupts the whole idea of finding product-market fit.

The other thing we see is the focus on AI as a product. If you're not creating a foundational model, which is very, very hard if you're not working deep in AI and data, you're basically creating wrappers: wrappers around models, around foundational models, or around other things. You have to be very sure, and the first question we ask is: does your startup become obsolete the next time ChatGPT releases a new version?

There's a funny anecdote, I don't know if it's lore or true because I'm not that involved on the West Coast, that an entire Y Combinator class was wiped out when OpenAI came out with custom GPTs. All those startups were doing was building ways to create your own custom model trained on your own dataset, and then custom GPTs come out and the whole class is wiped out.

So obviously that's quite an exaggeration, but make sure you're attacking the market in a way that's at least somewhat defensible against companies with trillion-dollar market caps. It's very difficult to compete with those folks.

So that's what I'm seeing. I'm sure the startups are using it on those three axes I talked about; everybody should be using it on those three axes. But I don't have a good example of how a startup specifically benefited. Did they raise a VC round faster because they used AI? I'm sure a lot of people have made much better pitch decks with AI. I just don't have a really great, visceral example of that.

Daniel Serfaty: No, no, but you gave us one very good example of that, and of the danger of just using AI as a wrapper, because I think there will be some backlash in the market, and investors are going to see through whether you're just using AI as a lubricant or using AI because it's part of the core of what you're doing.

Gene Kesselman: Absolutely.

Daniel Serfaty: Well, Gene Kesselman, thank you very much for sharing these additional insights. Straddling academia and industry as you do makes your position particularly privileged for seeing the evolution of all this in those two worlds, and sharing that with our audience is great. Thank you.

Gene Kesselman: My pleasure.