Generative AI in the Enterprise (with EY's Chief Technology Officer)

EY's Chief Technology Officer, Nicola Morini Bianzino, explains why generative AI is an arms race. We explore the defining characteristics of ChatGPT and similar tools, and the ethical and legal responsibilities of generative AI. Discover how generative AI will impact the future of work in the enterprise and why ethical AI guidelines are important for technology developers.

44:28

Feb 17, 2023
23,639 Views

The conversation includes these topics:

- Generative AI is an arms race
- Impact of generative AI on the enterprise
- Defining characteristics of ChatGPT
- Ethical and legal responsibilities of generative AI
- AI governance in the enterprise
- What jobs may generative AI make redundant?
- What is the impact of generative AI on knowledge management?
- How will generative AI change the future of work in the enterprise?
- Legal implications of generative AI
- Why do tools such as Google Bard or ChatGPT hallucinate?
- Challenges to enterprise adoption of AI tools
- Advice to business leaders on adopting AI in the enterprise
- Importance of ethical AI guidelines for technology developers in the enterprise
- Comparison of ChatGPT to low code, no code software tools

Nicola Morini Bianzino is EY's Global Chief Technology Officer, focused on bringing technology products to EY clients and positioning technology at the heart of the organization. With a 20-year track record of driving technology strategy innovation, he advises global clients on technology investment and their innovation agendas, providing industrialized technology products to meet their most pressing business needs.

An early AI pioneer, he wrote a thesis on the application of neural networks to business in 1997. Nicola is a high-profile global media commentator and contributes to MIT Sloan Management Review, Forbes and HBR. A thought leader on AI, machine learning, innovation and big data, he is passionate about extracting value from technology investment. He holds a master’s degree in Artificial Intelligence and Economics from the University of Florence.

Transcript

Michael Krigsman: Today we are talking about generative AI – tools such as ChatGPT – and their impact on the enterprise. Our guest is Nicola Morini Bianzino. You're the CTO of EY. Tell us about your work.

Nicola Morini Bianzino: I'm responsible for all of the technology that generates revenue for the firm, so I spent the last two years developing what we call the EY Fabric, which is effectively a platform with data and with several components with artificial intelligence on top of it that powers our service offerings. We have about 1.5 million clients that are using it every day.

Michael Krigsman: It's interesting. We don't think of a consulting firm as being a technology organization, but obviously, you are.

Nicola Morini Bianzino: I don't think there is any organization that can avoid being a technology organization at the same time. Look at what we're going to be talking about today. Technology is so basic in our lives that there is no business that doesn't use it.

Then if you start thinking about a consulting organization, when we provide services to our clients that are very data-intensive, technology becomes so central, so fundamental to the quality of the service that we offer and to our ability to deliver it across many countries, et cetera.

Generative AI is an arms race

Michael Krigsman: Nicola, products like ChatGPT – Google is coming out with their own version, Microsoft has adopted this now in Bing – it's so popular, but why, as an enterprise CTO, do you care about this?

Nicola Morini Bianzino: First of all, I think it's an arms race, which is great because usually when players compete with each other, we tend to benefit as consumers (in this case) or as enterprises. It's very exciting to think about where we're going next in terms of the capabilities of the technology.

To be more specific about generative AI, I'm looking at my own organization, for example, which is a knowledge-intensive organization. The ability to summarize knowledge in the way some of the language models are capable of doing absolutely opens up an endless number of opportunities for organizations.

For example, how do you systematize the knowledge within the organization? How do you build ontologies in the organization? How do you give advice to clients, or how do you handle, for example, scenarios where you have a sort of copilot?

Pretty much any one of the roles and jobs that we have in our organization would definitely benefit from having an intelligent agent sitting beside us every day. Not replacing us but supporting our knowledge and our capabilities in everything that we do. So, endless possibilities. And not only in the enterprise but also in our lives, I think there will be a significant impact.

Impact of generative AI on the enterprise

Michael Krigsman: As a practical matter, as you think about these generative AI capabilities, how do you think about it? Do you break it down into areas that you think might be useful, or is it still just too early of an exploration phase? Where are you in your thinking?

Nicola Morini Bianzino: I think it's going to go very, very fast. We're already exploring.

In our specific business, which is to give advice to clients based on our knowledge of regulations, our knowledge of interactions with other clients, et cetera, the ability to tap into that knowledge in a way that is multidimensional (like some of these tools can do) is absolutely incredible and fundamental. We have several projects already at EY that are attacking that.

But I think what is most fascinating about this latest development is, we had chatbots for a long time. Really. I don't know exactly when we started, but I would say some companies had their chatbots for at least eight to ten years.

But what is different about this one is the fact that you can actually have a conversation with it. Sometimes, with chatbots, as a consumer, you go onto a website because you want more information about a product, or you are complaining about a product, or something. You get into these very frustrating interactions with the chatbot where, if you don't phrase the question exactly the way the chatbot expects, you're going nowhere.

At least this is what I do. You get to a point where you say, "Please, I want to speak to an agent." A real human, right?

These tools now coming to the market are actually very different because they allow you to have a dialog; they can maintain the state of that conversation. With the previous chatbots, you ask a question, you get an answer, and that's it. There is no memory of that interaction that the chatbot can retain. Now there is, so you can keep going.

It's almost like, I call it – maybe a little bit abusing the term – a dialectic of AI where you ask a question, you get an answer, and you ask another question, and you get closer to what you want to get out of it as opposed to a sort of expert system where it's question and answer, and that's pretty much it.

It's very interesting. I think that is the feature that, to me, is most exciting.
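
As a minimal sketch of the difference Bianzino describes, the snippet below contrasts a stateless bot, which sees each question in isolation, with a ChatGPT-style dialog that resends the accumulated message history on every turn. The `fake_model` function is a stand-in for a real language model, not any vendor's actual API.

```python
# Stateless vs. stateful chat, in miniature. `fake_model` is a stand-in
# that just shows what context the model receives on each call.

def fake_model(messages: list[dict]) -> str:
    """Pretend LLM: its answer reflects everything it has been shown."""
    context = " | ".join(m["content"] for m in messages)
    return f"(answer informed by: {context})"

# Old-style chatbot: every question is handled in isolation.
def stateless_ask(question: str) -> str:
    return fake_model([{"role": "user", "content": question}])

# Conversational model: the full history is resent on every turn,
# so a follow-up like "how does that compare?" can be resolved.
class Conversation:
    def __init__(self) -> None:
        self.messages: list[dict] = []

    def ask(self, question: str) -> str:
        self.messages.append({"role": "user", "content": question})
        answer = fake_model(self.messages)
        self.messages.append({"role": "assistant", "content": answer})
        return answer

chat = Conversation()
chat.ask("What is the VAT rate in Italy?")
print(chat.ask("How does that compare to Austria?"))       # "that" is resolvable
print(stateless_ask("How does that compare to Austria?"))   # no context at all
```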

Defining characteristics of ChatGPT

Michael Krigsman: Would it be correct to say that you feel the defining characteristic is this ability to retain the state of the conversation?

Nicola Morini Bianzino: There are so many other things that are great. The ability to write in multiple languages, for one.

I'm Italian, originally. That's my first language. It writes as well in Italian as it does in English. And you can ask it questions in either language.

You can envision meetings where everyone speaks their own language. For global companies, I think that would be an incredible achievement.

Today, there is still a lot lost in translation. But these tools are so refined and sophisticated now that even the nuances of a translation can be really highlighted in the right way, which is incredible.

Ethical and legal responsibilities of generative AI

Michael Krigsman: We have a question that's come in from Twitter already, which is an important question from Arsalan Khan. Arsalan is a regular listener and I always thank Arsalan for his wonderful questions. He says, "When we use AI as a copilot, we have to be careful about the data it is pulling in order to give us suggestions. If we aren't aware of biases or stereotypes in the data, then what should we do?" It's one of the fundamental questions of dealing with these types of technologies.

Nicola Morini Bianzino: Absolutely. I have lots of people inside my own company. They reach out and say, "Oh, I want to try GPT. How can we use it with clients?"

A word of caution is, I think, well warranted at this point, meaning that you are the human in charge. You know what I mean? You cannot completely delegate the work to these agents because they're not yet at the level of being able to understand ethical codes, morality, and what is acceptable and not acceptable unless it has been labeled by another human.

The human-in-the-loop approach is absolutely essential here. I use it as an aid as opposed to a delegation of responsibility. That cannot happen. The human has to be in charge.

You can argue that even a human has a lot of potential flaws in terms of the answers he or she is capable of giving, but we are, for sure, not yet at the point where we can delegate a function to a tool like this.
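
To make the human-in-the-loop point concrete, here is a minimal sketch in which nothing a model drafts can reach a client until a named reviewer approves it. The `Draft` type and `send_to_client` function are illustrative assumptions, not part of any real system.

```python
# A human-in-the-loop gate: AI output is a draft with recorded
# provenance, and sending it without human approval is an error.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    source: str = "generative-ai"      # provenance is recorded, not hidden
    approved_by: str | None = None     # set only by a human reviewer

def review(draft: Draft, reviewer: str, approve: bool) -> Draft:
    """A human reads the draft and either signs off or leaves it blocked."""
    if approve:
        draft.approved_by = reviewer
    return draft

def send_to_client(draft: Draft) -> None:
    if draft.approved_by is None:
        raise PermissionError("AI draft was not approved by a human reviewer")
    print(f"Sending (approved by {draft.approved_by}): {draft.text}")

draft = Draft("Summary of the proposed regulation ...")
send_to_client(review(draft, reviewer="senior-manager", approve=True))
```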

Michael Krigsman: I heard a very interesting comment yesterday on the news from a journalist who said, as a reminder, that tools like ChatGPT or what Google is going to come out with – all of these tools – they're really like advanced autocorrect on your phone. They give the appearance of sentience, but it's just like autocomplete.

Nicola Morini Bianzino: We could go into a very long philosophical conversation about what sentient really means, but I agree. I would say autocorrect is maybe a little too trivial as a definition. It definitely is not just that. If autocorrect is at one end of the spectrum and sentience is at the other end, this is somewhere in the middle.

Sometimes – when there is a new technology, especially this kind of technology that is a little more difficult to understand because it's actually quite complex – we tend to engage in a sort of alienation. We transfer to these tools the expectations that we'd have of another human being. If we do that, that is the wrong approach.

This is not a human being. This is a bunch of silicon and a lot of data that gives you some kind of an answer.

But if you actually think about it as sort of an aid – like taking an airplane to go from San Francisco to New York – that is what these tools are. It's a machine that helps you do things faster. And if you think about it, it's not that different from the beginning, truly, of the mass adoption of the Internet.

I graduated in the '90s from university. Whenever I had to do something, a research paper, I had to go to the library in my city to find the physical book. It was taking forever to get the information out, et cetera.

Then search engines came along, and that was incredible because you could access the whole knowledge of the human race at a keystroke. That doesn't mean we assign a superhuman ability to that technology. It's something that helps you search things better.

I think we need to look at these new tools in that same way. Show me what's possible. Show me the search criteria. Show me what the search results are, and then I need to be the one that makes the decision on what is relevant and what is not.

Michael Krigsman: I think one of the challenges for folks in the enterprise, or just in our personal lives, is that the answers these tools give come across as authoritative.

Nicola Morini Bianzino: Yes. Yes.

Michael Krigsman: But it may be totally incorrect.

Nicola Morini Bianzino: Absolutely.

Michael Krigsman: [Laughter]

Nicola Morini Bianzino: [Laughter] And that is what we shouldn't—

This is the thing, right? If we start talking about human characteristics, and we transfer them to describe a tool, we make a mistake.

We will never call a car "fast legs." Maybe that's the wrong example, but you see what I mean.

If we use them, we need to put them in a sort of box. Like today, you do a Google search. It comes back with millions of pages sometimes. It's up to us to select the page that has the highest level of relevance to what we're looking for.

We need to look at it the same way, as opposed to, okay, there is superhuman intelligence there that knows everything. That's why I think, when you think about academia as well, if we start relying on these things too much, we lose the fundamental understanding of a domain or a subject. It can become a little bit dangerous from that perspective.

Michael Krigsman: Please subscribe to our YouTube channel, hit the subscribe button at the top of CXOTalk.com (our website), and subscribe to our newsletter.

We have another question from Twitter that's related to this. This is from Chris Peterson who says, "In business communications (whether it's B2B or B2C), do we have an ethical duty to call out what's written or edited by an AI assistant?" He says, "If a human reviews it, modifies it, and hits send, does that change this ethical duty?"

Nicola Morini Bianzino: Today, when you publish a paper, there is a process. You need peer reviews. You need to cite your sources. You need to do all these types of things. You cannot just say, "I came up with this," without attribution. That would be plagiarism. I think there has to be a similar framework in this space.

If a client, for example, asks me about some regulation somewhere, I can do some research. I can use ChatGPT, I can use another tool, or I can use my traditional Google searches, plus my own internal knowledge, to get the information out and provide that answer. When I do that in an email, I think it is fine because I am doing it as a human being. There shouldn't be any other way.

But if I'm producing a paper just using that, relying on somebody else's work, and I don't acknowledge it, I don't do the citations, and I don't do this and that, then of course I'm getting into plagiarism, I think, the same way a human would be if they did a research paper like that. It's complicated, so I think there is a need, especially in the whole legal discipline of intellectual property, for some rethinking of the fundamentals, because this is different.

Michael Krigsman: Yeah, and clearly there are a whole set of ethical and legal issues.

Nicola Morini Bianzino: Yes. Absolutely right, and so that's why I think we need—

Philosophy, the most ancient academic discipline, is still very much alive because we need to ask ourselves all of these questions. We need frameworks to address them.

When intellectual property first came up as a protected right, in the days when that legislation started to be introduced, there were similar conversations, I expect. You're not supposed to take somebody else's intellectual work and make it yours. I think we need to have that dialog today.

What is going to be much more difficult today is that, with these tools, the marginal cost of producing or generating these types of documents, or whatever we generate, is almost zero. We will be inundated by a volume of documents, papers, books, images, and so on. It will be really hard to understand what was generated by a human and what was not, et cetera. As Chris was asking, identifying the content will be a really good first step, I guess.

AI governance in the enterprise

Michael Krigsman: Arsalan Khan comes back again, and he says, "When everyone is, quote, 'doing AI,' what sort of governance structures should be in place?" I love this. Let's write a thesis on this. "What sort of governance structure should be in place at an organizational, governmental, [laughter] and global level?" Why don't we simply start with organizational governance of AI? That's going to become very important. Any thoughts on that?

Nicola Morini Bianzino: For example, I was mentioning a knowledge company like ours. First of all, you want to be able to store that knowledge in a way that is relatively secure.

Secure, I mean, not only from attacks from the outside. That's one piece. But also secure in the sense that there is knowledge the company wants to protect.

If you think about an ontology within an enterprise, and you have the definition of some entities or concepts, you want to make sure that the definition is shared by everybody, is approved, et cetera. I think there is a need to establish – I'm going to say it, though it's probably the wrong word – an editorial board for this type of knowledge that gets systematized within the knowledge repository of the enterprise.

The concept of what revenue is, for example, can be different for one company than what is used on the street. And so, when people are searching for the word revenue, there has to be an approved definition of it behind the scenes.

To me, that is highly curated content. You have to have an organization, I believe, that looks after that, makes sure that nothing—

If you ask for revenue and it gives you another answer, that's not good. That type of potential divergence has to be understood, prevented, and managed. That, to me, is the key.

In some other areas, being true to the textbook definition of things is not that critical, and I think a little more dialog could be welcome. I think we need to let the tool itself learn and evolve, et cetera. Too much control and too much governance, I don't think, is going to be a good thing.

I guess the answer is, it depends. A typical consulting answer, but it depends on what type of knowledge you want to maintain, govern, and protect, as opposed to the kinds you want a little more free-flowing and interactive.
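
A minimal sketch of that "editorial board" idea, under the assumption that governed knowledge lives in a glossary where every definition carries an approval status: lookups return only definitions the board has signed off on. The `GLOSSARY` structure and its entries are invented for illustration.

```python
# An enterprise glossary with editorial governance: search can only
# surface definitions that carry board approval.

GLOSSARY = {
    "revenue": {
        "definition": "Income recognized under the firm's approved policy.",
        "status": "approved",
        "approved_by": "finance-editorial-board",
    },
    "churn": {
        "definition": "Draft definition pending review.",
        "status": "draft",
        "approved_by": None,
    },
}

def lookup(term: str) -> str:
    entry = GLOSSARY.get(term.lower())
    if entry is None:
        raise KeyError(f"No governed definition for {term!r}")
    if entry["status"] != "approved":
        raise ValueError(f"{term!r} has no board-approved definition yet")
    return entry["definition"]

print(lookup("revenue"))  # the single, approved answer to "what is revenue?"
```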

Michael Krigsman: That makes a lot of sense. And looking at other important domains inside an organization that require governance, this is a natural kind of approach to use that as a model, but then be open to the fact that, as you said, generative AI will change and evolve over time, and so we can't be too locked down about it.

Nicola Morini Bianzino: You cannot constrain it too much, I think, because otherwise you lose the value of it. It becomes just curated knowledge, and curated knowledge is rearview knowledge, so you're not able to anticipate what's coming.

What jobs may generative AI make redundant?

Michael Krigsman: We have another really interesting question, a short question, a hard question from Anil Vyas. He says, "What kind of jobs may become redundant by generative AI tools?"

Nicola Morini Bianzino: I don't think the technology is there yet. Okay, I don't know about the next five to ten years. That's different. But right now, I don't see, again, the complete delegation of a function to another tool.

Even in the creator economy – even if you create digital art for a living, that's your job – I don't think it can be completely created by the machine. We're seeing AI art, et cetera. But at the same time, I think if you are an artist, if you are a creator, there is a lot of additional value that you can add on top of what the machine creates by itself.

In the enterprise, I see more jobs created than jobs disappearing because I see a lot of opportunity, for example, for people who can systematize data in a different way. Again, I am very big on this concept of ontology because I think if you can store the enterprise knowledge, its value translates into shareholder value. It also allows you to protect the IP and the capabilities of an organization.

Think about our organization: 360,000 people, 8 hours a day, 2.4 million hours of work every day. If you can capture that knowledge, structure it in a machine, and then give our clients and our own people access to it, that makes our company even more valuable than it is.

I see more opportunities than less, if that makes sense.

What is the impact of generative AI on knowledge management?

Michael Krigsman: You have been alluding to and talking about this knowledge management function. How does generative AI change knowledge management relative to the historical tools?

Nicola Morini Bianzino: If I look at my company – I'm not going outside of our business – that is, to me, the low-hanging fruit: the fact that we are able to access all that knowledge at our fingertips in a way that is human-friendly.

Today, in our tax business, for example, if I had to help a client understand the taxation regulations across ten countries, usually I would have to read millions of pages of stuff, and it would take an army of people to do it. If I have a tool that (with a level of confidence) summarizes all that information, I can spend more time talking with clients about their options rather than just reciting what the regulations are. People will be happier and will have better access.

Of course, everything has to be curated, as I said, especially in this type of business. It has to be curated, managed, and governed. It cannot be the wild west because you don't want the machine to think we're looking at the legislation of Switzerland when, in reality, we need to talk about Austria. That is a potential issue.

Once you have done that, I think the number of hours we spend doing more valuable work will be higher, people will be more satisfied, and the quality of the work we do will be higher as well.
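
As a sketch of what that curation can look like in practice, the snippet below tags every source document with a jurisdiction and filters before anything reaches the model, so a question about Austria can never be answered from Swiss material. The `DOCS` list and `retrieve` function are hypothetical.

```python
# Curated retrieval: jurisdiction metadata is enforced *before* any
# text is handed to a language model for summarization.

DOCS = [
    {"country": "CH", "text": "Swiss VAT guidance ..."},
    {"country": "AT", "text": "Austrian VAT guidance ..."},
    {"country": "AT", "text": "Austrian corporate tax note ..."},
]

def retrieve(query: str, country: str) -> list[str]:
    in_scope = [d["text"] for d in DOCS if d["country"] == country]
    if not in_scope:
        raise LookupError(f"No curated sources for jurisdiction {country!r}")
    # A real system would rank `in_scope` against `query`; here we just
    # return everything that passed the jurisdiction filter.
    return in_scope

for passage in retrieve("VAT on services", country="AT"):
    print(passage)
```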

How will generative AI change the future of work in the enterprise?

Michael Krigsman: How will these kinds of tools change work inside an enterprise?

Nicola Morini Bianzino: In every business application – the ones that we build; we don't have it yet, but we're going in that direction – the idea is to have a little sub-window or frame on the side of the main screen that allows me to ask questions on the fly, in real time, about what I'm doing. Think about someone who gets out of school (23, 24 years old, out of university) and joins our company. Instead of weeks and weeks of training and making sure they get to the right person as a mentor, et cetera – they can still do that, of course, because the human relationship is so important – if they can access that institutional knowledge, codified in the system, it would be much easier for them to work and to grow.

That's the copilot concept. It's not that the function will be automated or replaced, but I have this agent that I can talk to and ask questions. It would be fantastic. I wish I had had that at the beginning of my career.
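
A sketch of that side-panel copilot, assuming the prompt is assembled from the user's current task plus the firm's codified knowledge; `call_model` stands in for any hosted LLM, and the knowledge entries are invented.

```python
# The copilot pattern: the assistant's prompt combines (1) what the
# user is doing right now and (2) codified institutional knowledge.

def call_model(prompt: str) -> str:
    return f"(model answer for: {prompt[:60]}...)"  # stand-in for an LLM call

INSTITUTIONAL_KNOWLEDGE = {
    "engagement-setup": "Checklist: confirm independence, scope, budget codes.",
}

def copilot_ask(question: str, current_task: str) -> str:
    context = INSTITUTIONAL_KNOWLEDGE.get(current_task, "")
    prompt = (
        "You are an onboarding assistant for new joiners.\n"
        f"Firm guidance for this task: {context}\n"
        f"Question: {question}"
    )
    return call_model(prompt)

print(copilot_ask("Which codes do I book my time to?", "engagement-setup"))
```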

Michael Krigsman: I do a fair amount of writing, and I use ChatGPT, and I just subscribed or was put on the user list for Microsoft's Bing, and I can't wait for Google to come out with their product. I actually subscribe to several of these products. As a copilot, I have to say it's a phenomenal thing.

Nicola Morini Bianzino: Yeah.

Michael Krigsman: I mean the body of knowledge, the efficiency it brings, and just coming up with new ideas. It's very fast, much faster than I could be on my own.

Nicola Morini Bianzino: Absolutely. But the new ideas, they're not really the machine's.

This is the thing, right? We shouldn't alienate ourselves in the sense that we give these tools too much responsibility in terms of the level of intelligence because, ultimately, there is always (today, at least) a human behind the scenes who has curated that content, structured it in a certain way, and trained the model in a certain way. It's still human knowledge, but the nice thing about it is that you can have access to it immediately.

I gave the example earlier of the old-school, hardcover type of library. You had to find the book in the library to be able to talk about a specific domain. Here, you can have access to an incredible amount of data very, very quickly.

Michael Krigsman: We have another really interesting question from Anil Vyas. He says, "If the data set that trains the generative AI has an intrinsic bias, how can we get rid of this bias or overcome this bias so that it is neutral?" I think he's talking about two things: one, from the point of view of the software developer, and two, from our point of view as consumers, as users.

Nicola Morini Bianzino: It fundamentally comes down to the people who manage the data that trains the model. If you feed racist concepts into the model, the model will return them. The same is true for bias of any kind.

Fundamentally, it's a human responsibility. In enterprises, as I said earlier, we need to govern that.

First of all, we need to have values – enterprise-level values – and things that you would and would not do, because that will help define the concept of bias and how people should manage that part. Then, at the same time, as I said earlier, there has to be an editorial board of some sort that oversees and takes responsibility for what goes into the machine, ultimately. There is risk, for sure, and it's something that we need to take very seriously.
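
One concrete check such an editorial board might run before data "goes into the machine" is sketched below: compare positive-outcome rates across groups in a training set and flag divergence for human review. The rows and the 0.2 threshold are invented for illustration; a real audit would cover many more dimensions.

```python
# A minimal training-data bias check: flag the dataset for human review
# if outcome rates diverge too much across a sensitive attribute.

from collections import defaultdict

training_rows = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

def positive_rates(rows: list[dict]) -> dict[str, float]:
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["label"]
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rates(training_rows)
print(rates)  # roughly {'A': 0.67, 'B': 0.33} here: a gap worth investigating
if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
    print("Warning: outcome rates diverge across groups; review before training")
```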

Legal implications of generative AI

Michael Krigsman: This is again from Arsalan Khan who says, "When it comes to life and death situations and the AI is wrong, who should we sue, the AI vendor, the data owners, the algorithm creators? Who do we sue when something really goes wrong?"

Nicola Morini Bianzino: I think we should not get in the position where it's a question of life and death left to a machine. That to me is the number one.

If you get to a position where the machine has to make a decision with the power to drive an event at this level, I think there is something wrong in the way we're approaching it. These technologies are not at that level yet, and we don't want that, I think, until it can be proven otherwise. If, let's say, a medical diagnosis has to be provided, the expectation is that there is always a physician in charge (with the help of the tools), but fundamentally, it comes down to the human.

You can think about stretching it out a little bit from just the generative AI piece of it. We're not ready yet to do fully self-driving cars for a number of reasons.

One of the reasons is not just technology. It is about who is liable, who is accountable, what kind of ethical decisions need to be taken, et cetera. If we're not ready there, which is, in a way, a more mechanical application, we shouldn't be ready to delegate the responsibility for these types of decisions to just a machine – yet.

Why do tools such as Google Bard or ChatGPT hallucinate?

We have another great question, this time from Lisbeth Shaw on Twitter. She says, "Why do tools such as Google Bard or ChatGPT hallucinate even when it's using correct source data, and what are the implications, therefore, for using these kinds of tools for knowledge management inside an organization like EY?"

Nicola Morini Bianzino: There is that risk because, in the end, it's reinforcement learning. The model learns, and the answers that give the best results are the ones that win.

There is a sort of selection mechanism for the answers. Sometimes the answers that are rewarded are the ones going off on a tangent. You can probably get into that scenario.

At the end of the day, it's a statistical tool, right? We call it artificial intelligence, whatever, but effectively it's the application of statistics to a very complex problem. It's still statistics.

There are absolutely situations where statistics takes you to the outlier. Absolutely. That's possible. I don't know; I'm not a mathematician, but I'm sure there are lots of ways this can be described mathematically.

In terms of what needs to happen for the enterprise, as I said earlier, that should not be allowed because you don't want a tool speculating on what tax legislation might be (if I look at our own business). We need to give specific answers to clients.

That's why a curated approach, for the time being, is absolutely critical. It's an investment that every company needs to make, both on the bias and ethics side and on the content of the answers and the questions, et cetera. There has to be some sort of labeling that happens when the tool gives an answer, and you train it to make sure that this is the proper answer.
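
His "it's still statistics" point can be made concrete with a toy next-word sampler: each step draws from a probability distribution, so a low-probability continuation occasionally gets picked and then reads just as fluently as the likely one. The vocabulary, probabilities, and temperature below are invented for illustration.

```python
# Generation as sampling: the outlier continuation is rare but
# perfectly possible, and nothing marks it as wrong when it appears.

import random

NEXT_WORDS = {
    "the tax rate": [("is 20%", 0.90), ("is 19%", 0.08), ("was abolished", 0.02)],
}

def sample(context: str, temperature: float = 1.0) -> str:
    words, probs = zip(*NEXT_WORDS[context])
    # Temperature reshapes the distribution; higher = more adventurous.
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(words, weights=weights, k=1)[0]

random.seed(7)
draws = [sample("the tax rate", temperature=1.5) for _ in range(20)]
print(draws.count("was abolished"), "of 20 samples picked the 2% outlier")
```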

Challenges to enterprise adoption of AI tools

Michael Krigsman: Can I ask to what extent are you thinking through these kinds of processes at EY?

Nicola Morini Bianzino: We are experimenting, so we're trying to understand how much work needs to be done, et cetera. We are in the process of defining all of that.

I think that's why, on the previous question – "Are we going to replace people?" – I don't see that. I think the more we use these tools, the more we need to add human expertise to them, so it's going to be the opposite. For at least the next few years, until these tools mature and there is more knowledge about them, there is definitely a need for enterprises to invest in human capabilities around this.

Michael Krigsman: Chris Peterson on Twitter comes back, noting that cybersecurity issues created a need for the cyber insurance industry. He asks: do such firms and policies have a role in shaping and governing the use of AI in the enterprise regarding liability?

Nicola Morini Bianzino: I think, in the future, probably yes. Again, I don't see any enterprise function completely delegated to AI. And so, if it's always a sort of centaur model, where the human and the machine together do something on behalf of the enterprise, then fundamentally it's the same level of liability and responsibility you have today.

In the future, for example, for credit decisions or other types of things (without going into the most dramatic healthcare-related decisions), I don't see why there couldn't be some level of insurance. But in my understanding so far (and it's for sure not complete), I haven't seen anybody yet who is planning to run a piece of the business in a completely automated fashion like that.

Michael Krigsman: What are some of the challenges that you see to adoption of these kinds of tools in the enterprise?

Nicola Morini Bianzino: One is overhype. I've done AI for most of my career, starting from school, basically. I've seen these ebbs and flows of excitement.

I started really in '97, and it was the winter of AI. Nothing really happened for years, even though there was a lot of potential. In those days, it was all about expert systems, you know, that kind of knowledge, because we didn't really have the technology in terms of processing power, memory, and all those kinds of things.

Then it started winning some games, like chess world championships, blah-blah-blah. I'm not going to recount the whole history. But every single time there was a major breakthrough, like what IBM's Deep Blue did in the '90s, beating Kasparov, there was huge interest and everything. Then it went down again for a few years.

Then deep learning came up, and we started doing much more with image recognition, which I think is a great application of artificial intelligence. That led to another big spike of interest, which is actually leading to these new language models. There are similarities.

Sometimes we tend to, again, humanize these technologies to the point that our expectations of what they can do become the same level of expectation we would have of a human. But it's not that.

I think that if I were an enterprise (outside of mine, of course), I would spend time trying to demystify, in a way, what these things can do but, at the same time, show the value they can provide, because there is a lot of value out there. It's just that, unfortunately, what happens is that people get overhyped, and then they get overly disappointed.

Maybe a year from now, we'll be saying that ChatGPT was a great failure, even though it's actually an incredible innovation. I don't think so, but it could happen. It could happen because people think these things can do what a human being can do. They can't right now.

Advice to business leaders on adopting AI in the enterprise

Michael Krigsman: What advice do you have for folks in the enterprise who are business leaders and technology leaders who are looking at these kinds of technologies and trying to figure out what to do?

Nicola Morini Bianzino: The bottom line is that this is going to go really fast. Again, as I was saying at the beginning, it's an arms race between super-powerful companies that have a lot of money to invest. Then you start getting into governments, et cetera.

I think this is going to be exponential. The speed of these innovations will be exponential. I think we're just seeing the beginning of it.

I think there are two things to think about. One is that, right now, you need to know what this is and understand how to apply it to your business.

There is no postponing it and saying, "Oh, I'm not going to be an early adopter. I'm going to be a very fast follower."

Today, a very fast follower has to be a super-fast follower. A very fast follower could mean six months.

I think large enterprises can't afford to do that. There is a need to experiment, to hire the right talent or train the right people, to get a good handle on where this industry is going. You have to keep an eye on this. That's the bottom line.

Then the other thing is that a lot of today's questions were very good questions about the ethical implications, bias, and all of that. That's really important because it has a massive brand impact. There is also a risk management side to these technologies that has to be understood very well.

If I were the CEO of an organization, I would push my technology team to have the right skillsets to understand what this is and where it's going. And at the same time, I would put in place some risk management thinking around it as well.

Importance of ethical AI guidelines for technology developers in the enterprise

Michael Krigsman: To what extent do you think the developers of these tools, and the folks gathering the data sets, need to have that ethical understanding? The reason I ask is that, historically, technology development was kept somewhat separate from the ethics of how that technology is applied.

Nicola Morini Bianzino: It is absolutely critical. Think about what's happening with social media. How important is content moderation in social media?

This is not exactly the same, of course, but if you translate it into this space, there is opportunity to abuse these tools everywhere. As I said, the brand impact is massive, so there is a need to set guidelines.

For example, internally at EY, we have specific guidelines on how to use the tool and what we can and cannot do with it (for now). Maybe they're a little bit restrictive, but we want to be on the safe side of things. Then we'll open them up a little more as time goes by.

Everything starts with a true understanding of what is available and possible. If you don't understand that in detail, it's difficult to put the boundaries around it.

Michael Krigsman: Today we're really at this phase where we're understanding what's reasonable, what's practical, what can we do.

Nicola Morini Bianzino: Yes, and it shouldn't be underestimated in terms of its ability to drive innovation. But it shouldn't be overestimated either, in terms of being a sentient being or anything like that, because I think we're far from that.

Michael Krigsman: I have to say the interesting thing about this question to me is the fact that people have this conversation because, at times, these chats mimic human conversation so well.

Nicola Morini Bianzino: Yes, it's incredible. I think what they have done is, in my mind, definitely one of the most incredible inventions of this century, right?

There is the iPhone, mobile phones, social media, all that kind of stuff. Great. I think it's super important. But this one goes toward another level of innovation because it almost looks like the reasoning of a human. It's absolutely incredible, so I'm super excited about it.

Michael Krigsman: Why don't we finish up with another question from Arsalan Khan who says on Twitter, "What do you hope future iterations of AI can do?"

Nicola Morini Bianzino: I think the future iterations have to be more transparent. You guys have asked fantastic questions about bias, ethics, and all of that. That is a problem, right?

These technologies are powerful. If they get into the wrong hands – able to generate content that changes behaviors and the public understanding of things – it's really dangerous because we're basically adding another layer of intermediation between us and the sources of knowledge.

If that is done without the right ethics, I guess, it can influence politics. It can influence elections. It can do a lot of different things that we don't want. I think that transparency and the ability to monitor what it does and how it's trained are really important.

But then I cannot wait for the future because there is so much more. Think about the breakthroughs that we can have when the sum of the human knowledge is so accessible and in such a smart way.

It will take years. It's not as if, on Monday, we're suddenly going to implement nuclear fusion. That's not the thing. But in terms of stimulating a different level of thinking and summarizing this knowledge, it's incredible. I'm so excited about it.

Comparison of ChatGPT to low code, no code software tools

Michael Krigsman: We have one last question that snuck in under the wire, so how about we finish with something from Melvin Aguiar, who says, "How comparable are products like ChatGPT to low code, no code? Will they coexist, or will one eliminate the other?" I think he's referring to the fact that these tools are really optimized to help write code.

Nicola Morini Bianzino: Yes, I think right now it's not that big yet, but definitely that is the direction. Ask yourself how much code you will have to write in five years when you can actually talk to a tool and say, "Can you build me this application with these characteristics, these inputs, outputs, data, et cetera?"

I think, as they say, the writing is on the wall: software engineering will have to change completely. It doesn't mean there are not going to be software engineers. Absolutely not.

But I think the role of the software engineer will have to change a little bit more and get closer to the business because we still need the human that can summarize those needs and requirements into an ask of the machine. But do you need armies of people to code routines in different programming languages? Maybe not.
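
As a sketch of the "describe the app, get the code" direction, the snippet below sends a natural-language spec to a hosted model using the OpenAI Python SDK (v1 style). The model name and the spec are assumptions, and the final comment makes his point: an engineer reviews the output before anything ships.

```python
# Natural-language-to-code, sketched with the OpenAI v1 SDK. Requires
# the `openai` package and an OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()

spec = (
    "Write a small Python CLI that reads a CSV of invoices "
    "and prints total revenue per country."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever is available
    messages=[
        {"role": "system", "content": "You write clean, idiomatic Python."},
        {"role": "user", "content": spec},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # a human engineer reviews this before it ships
```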

Michael Krigsman: With that, unfortunately, we are out of time. It's been a fascinating conversation. I want to say a huge thank you to Nicola Morini Bianzino. Nicola, thank you for being here with us today.

Nicola Morini Bianzino: Thank you for having me, Michael.

Michael Krigsman: Everybody, thank you for watching, especially those folks who ask such great questions. I always encourage you to watch and ask questions. You guys are an amazing audience.

Before you go, please subscribe to our YouTube channel, hit the subscribe button at the top of CXOTalk.com (our website), and subscribe to our newsletter. Tune in for our live shows, and we will see you again next time. Have a great day, everybody.

Published Date: Feb 17, 2023

Author: Michael Krigsman

Episode ID: 779