The Evolving Landscape of Enterprise AI

Explore the rapidly changing world of enterprise AI and generative AI on CXOTalk #818. The discussion includes data management, ethical practices, and metrics.

Dec 15, 2023

In CXOTalk episode 818, Michael Krigsman talks with Anthony Scriffignano, an acclaimed data scientist, about the rapid evolution of enterprise AI, particularly focusing on generative AI. Scriffignano has a rich background in data science, having worked in various industries, and holds 98 patents. He shares his insights on AI developments, their practical applications in business, and the ethical considerations surrounding AI technology.

Key points of this episode include:

  • Generative AI and implications: The conversation explores generative AI, explaining how the technology differs from traditional AI by generating new content rather than just analyzing existing data. This has broad implications for enterprises, from improving efficiency to raising ethical concerns.
  • AI growth and value in business: The rapid growth of AI presents challenges in terms of adoption, understanding, and regulation. Scriffignano emphasizes the need for new language and frameworks to discuss and manage AI effectively.
  • AI ethics and corporate responsibility: The discussion covers the ethical use of AI in business, stressing the need to consider potential risks and impacts of AI deployment in various industries, including the military.
  • AI in business strategy and innovation: Scriffignano advises on prioritizing AI use cases and investments in enterprises, emphasizing the balance between innovative applications and responsible usage.

This episode offers a comprehensive view of the current state and future prospects of AI in the enterprise, highlighting the need for responsible innovation and ethical considerations in AI deployment.

Anthony Scriffignano, Ph.D. is an internationally recognized data scientist with experience spanning over 40 years in multiple industries and enterprise domains. Scriffignano has an extensive background in advanced anomaly detection, computational linguistics, and advanced inferential methods, leveraging that background as primary inventor on almost 100 patents worldwide. He also has extensive experience with various boards and advisory groups.

Scriffignano was recognized as the U.S. Chief Data Officer of the Year 2018 by the CDO Club, the world's largest community of C-suite digital and data leaders. He is a Distinguished Fellow with The Stimson Center, a nonprofit, nonpartisan Washington, D.C. think tank that aims to enhance international peace and security through analysis and outreach. He is a member of the OECD Network of Experts on AI working group on implementing Trustworthy AI, focused on benefiting people and planet.

He has served as a commissioner for the Atlantic Council, most recently contributing to a Report on the Geopolitical Impacts of New Technologies and Data. He has briefed the US National Security Telecommunications Advisory Committee and contributed to three separate reports to the President, on Big Data Analytics, Emerging Technologies Strategic Vision, and Internet and Communications Resilience. Additionally, Scriffignano provided expert advice on private sector data officers to a group of state Chief Data Officers and the White House Office of Science and Technology Policy. Scriffignano serves on various advisory committees in government, the private sector, and academia. Most recently, he has been called upon to provide insight on data science implications in the context of a highly disrupted datasphere and the implications of the global pandemic. He has published, delivered keynote presentations, and participated in panel presentations extensively in various settings internationally concerning emerging trends in AI and advanced analytics, the “Big Data” explosion, artificial intelligence applications and implications for business and society, and multilingual challenges in business identity and malfeasance in commercial and public-sector contexts.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Today on episode 818 of CXOTalk, we're discussing the changing world of enterprise AI, with a special emphasis on our friend generative AI. Our guest is one of the most acclaimed data scientists in the world, Anthony Scriffignano. And with that, Anthony, welcome back to CXOTalk.

Anthony Scriffignano: Thank you very much, Michael. Not sure about the "one of the most," but I appreciate the comment. Thank you.

Michael Krigsman: Anthony, you do have an extraordinary background. So, give us a sense of what you've been working on, what you've done.

Anthony Scriffignano: I've worked in many industries. I worked in manufacturing for quite a long time, and in management consulting with one of the Big Four. I worked for 21 years as the data scientist at Dun & Bradstreet, and now I'm working for the Stimson Center, which is a storied Washington, DC think tank that does amazing things all over the world. So, it's been quite a ride.

I've done a lot of inventing over those years: language, a lot of focus on multilingual work, something called semantic disambiguation, identity resolution, fraud, veracity adjudication, which is judging the truthiness of things, geospatial inference. A lot of it is what we would now call AI, but really it's a lot of different technology applied to answering really tough questions at the edge of computer science.

Michael Krigsman: And what I find amazing is that you've developed like 100 patents.

Anthony Scriffignano: Other people help, but it's only 98 right now, and it's a few ideas, and then you have to defend them in different ways in different parts of the world. But yes, it's quite a lot. And that's part of my job. Part of my job is to innovate, and innovating includes inventing, and inventing includes protecting that intellectual property and telling the world that you've done it.

So that's part of that journey.

Supervised and unsupervised learning for AI in business

Michael Krigsman: What is going on with AI right now? What is the kind of status of AI and especially generative AI in the enterprise?

Anthony Scriffignano: In order to answer that question, I need to unpack something, because there's no intelligence in artificial intelligence. It's a bunch of math. AI algorithms, things that fall into that domain, generally fall into different categories. Supervised learning is the thing most of us will be familiar with, sometimes called machine learning. These are techniques that regress around the past, around some reasonably representative sample of training data. They learn the behavior in that data and then project it into a reasonably unperturbed future, usually in the short term.

That's where you get things like scores or predictions that are really short term. Unsupervised methods try to coalesce the data where there isn't necessarily any a priori knowledge, any rows and columns that tell you what the data is. Or maybe you have rows and columns that tell you what it is, but you don't really have enough of it to be representative of the past.

So, think about how a recommendation engine works when it's never seen you before, which is how most of them work: it has never seen you before, but it has seen the way others behave. It doesn't know you. So it's a little bit of a combination of those two things: unsupervised, coalescing almost everything it can see, and supervised, learning from the past. Mash them together and you get these hybrid methods.

Generally, generative AI falls into that category of hybrid. If you really peel it back, there are models, and there are convolutional algorithms which sort, more or less, the data set underneath them. What the term generative AI means is that instead of just looking back at the data, these systems generate their own content.
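To make the distinction Scriffignano draws concrete, here is a minimal, illustrative sketch (not from the conversation; the data and model choices are assumptions): a supervised model regresses on a labeled sample of the past to score new cases, while an unsupervised method coalesces unlabeled data into groups.

```python
# A minimal sketch of supervised vs. unsupervised learning. Toy data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised: learn from a "reasonably representative sample of the past"
# (features X with known outcomes y), then score a new case.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
model = LinearRegression().fit(X, y)
print("predicted score for a new case:", model.predict(X[:1])[0])

# Unsupervised: no a priori labels; just coalesce the data into groups.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments for first five cases:", clusters[:5])
```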

About generative AI in the enterprise

Anthony Scriffignano: That's where the term comes from. The type of generative AI that many of us will have seen before that term became popular was in chatbots, where you're not really talking to a person, but you're supposed to think you're talking to a person; the system is generating content. Those systems are generating something that never actually occurred.

The thing that has happened over the last year or so, and I try not to name specific products, but we all know what we're talking about, is that particular, very popular models became publicly available. Suddenly there's this explosion of mostly text-based techniques for generating lots of words. Say you want a summary of what happened with a certain piece of legislation over the last week; I just didn't have time to read all the articles.

I'm busy, and I can go to one of these systems and say, give me a summary of what has happened in this regard. There's something called prompt engineering now: ask a better question, get an answer that's closer. So there's AI about the AI. There's even generative AI that looks at output to determine whether it's likely that it was generated by generative AI.

So there's this sort of Silicon Hill; there's something going on in there. What I say often when we talk about AI is that we need some new words, some new language to describe this. When you hear experts, or people who aren't actually experts, talking about this field, it's a lot of words. They try, as much as they can, to sound like everybody else that's talking about this.

That's a form of, you know, human generative AI: I don't really know exactly what this is, so let me try to use everything I've learned and generate something that sounds like it's responsive. We're not there yet. It's definitely still maturing.

Driving forces behind the growth of AI

Michael Krigsman: Anthony, how much of this growth and adoption is due to the fact that there's something genuinely new versus the incredible marketing hype machine that is OpenAI and ChatGPT?

Anthony Scriffignano: All of a sudden this great, amazing tool became available to everybody, open to the world, and the world started playing with it. And it's a lot of fun to play with, and it is pretty amazing. I don't want to put you in a certain age group, but I'm old enough to remember, in the early days of computers, there was a computer program called ELIZA that mimicked a psychoanalysis session.

You know, "Hello, I'm ELIZA, tell me about your problems." And then it would parse your language. If you said the word "mother" somewhere, there was a pattern that would say, "Tell me about your mother." If you said certain things like "problems," it would say things like, "Tell me more." A really primitive attempt, but it showed that you could be tricked into believing you were talking to something intelligent when you weren't.

Well, now we're at the point where it's a lot harder to tell. And certainly with this becoming available to the public, it's available to everyone. That doesn't mean it's available to everyone in a way that is appropriate to use. So we really have to be careful. There are a lot of folks running around in the enterprise world right now saying, how do we monetize this?

How do we get out there with products and solutions that use generative AI? We need to tell our customers and our investors. Yes, you do need to do that, but you need to think carefully. Besides being able to say you're using generative AI, what new problems are you solving? What problems are you solving better? What costs are you reducing?

What risks are you mitigating? How do you know you're not introducing bias? How do you know you're not introducing a bunch of new problems that you didn't think about? There are a lot of questions that need to be asked. It's okay to experiment, but you don't really want to run in and hit that ball as hard as you can until you understand what the implications are.

Michael Krigsman: Subscribe to our YouTube channel. Subscribe to the CXOTalk newsletter. We have amazing shows coming up. Check out the homepage of CXOTalk.com. 

AI ethics, culture, and business leadership

Michael Krigsman: We have a very interesting and important question on LinkedIn from Simone Jo Moore. She makes the comment that the science of AI is moving so quickly; how does it line up today with business ethics and business values in the enterprise?

Anthony Scriffignano: The technology is moving faster than the ability to adopt it or understand it. Certainly, regulation is lagging behind, and innovation will always be in front of regulation, but right now we have a lot of catch-up going on around the world. Part of what's in your question is the essence of science. The scientist observes the world, asks a meaningful question, uses an appropriate method to address that question, understands the bias, understands what others have done before. Observe some of what's going on right now.

There's a lot of "let's kick it and see what happens" going on. That's dangerous. It's not the first time this has happened: when the PC became available, there were similar behaviors; when the automobile became available, there were similar behaviors. But the pace of it now is historically unprecedented. Consider the amount of data. Forget AI for a second.

There's something called the datasphere. The amount of data available to mankind is increasing at a rate that is arguably unmeasurable. I say arguably because some people publish things that attempt to measure it, but if you read all the footnotes, they're really estimating it. Everything creates data. Data itself begets data. Data is not like oil, because data doesn't get destroyed when we use it.

We're certainly not using all the data that we're creating; more than 80% of it is unstructured. Generative AI is pretty good at dealing with unstructured data. So there's progress, but we are not keeping pace with what's happening. You'll hear the word democratization a lot. It's a big word for saying what happened when PowerPoint became available.

All of a sudden everybody could create a document and make it look like it came from a print shop, except you didn't study fonts, layout, design, kerning, and that didn't stop us from using it. So you had all these death-by-PowerPoint moments where people just created these horrible presentations. They still happen.

People create these horrific presentations that they're very proud of; they use all the features that are available. It's happening a lot with this type of technology, where there's a race to see almost how outrageously cool we can be with it. But there's that word you used: responsible. So what happens if I use that data and gen AI? I'll just take an example.

How should I invest in the context of recurring global disruption? Good question, but the answer doesn't lie in the data that's behind me, because the disruption that's in front of me doesn't look like anything that happened in the past. When there's a ship stuck in the Suez Canal, that never happened before and will probably never happen again. Don't tell me you can look at the data behind you and figure out what to do.

Not 100% of it; maybe you can use around 80% of it, I don't know. But in general, you have to be very careful to make sure that the data doesn't contain biases, and that trying to eliminate bias doesn't introduce new types of bias. If most of your customer data comes from a certain demographic and you learn what your customers' intent is from that data, you have introduced that demographic bias into your learning.

If you're trying to serve a more diverse set of customers, you're not going to find that message in data that didn't come from the ones you're trying to serve. So you really do have to step back and answer these important questions first, not just jump right into this, click the button, and see what happens.

It can be quite irresponsible, depending on the use case. If you're just trying to get your message out, marketing, maybe it's okay, and I'm not trivializing marketing. But with a broad brush straight from discovery to rollout? Maybe it shouldn't be on the market yet.
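As a hypothetical illustration of the demographic-bias point above (the column names, target mix, and threshold are invented for the sketch, not from the episode): before learning "customer intent" from data, one might check whether the training mix over-represents a segment relative to the market being served.

```python
# Hypothetical sketch: check whether training data over-represents one
# demographic segment before learning customer intent from it.
import pandas as pd

customers = pd.DataFrame({
    "segment": ["A"] * 80 + ["B"] * 15 + ["C"] * 5,
    "converted": [1, 0] * 40 + [1] * 15 + [0] * 5,
})

# Compare the training mix against the market you intend to serve.
training_mix = customers["segment"].value_counts(normalize=True)
target_mix = pd.Series({"A": 0.4, "B": 0.35, "C": 0.25})

skew = (training_mix - target_mix).abs()
print(skew.sort_values(ascending=False))
if (skew > 0.10).any():
    print("Warning: training data is demographically skewed; "
          "a model fit on it will inherit that bias.")
```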

Dangers of generative AI and machine learning algorithms in the enterprise

Michael Krigsman: Simone Jo Moore on LinkedIn comes back with another interesting question or comment. She says that AI today, generative AI, is a mimic. It appears to have intelligence. It appears to have emotion.

Anthony Scriffignano: Absolutely.

Michael Krigsman: And it therefore results in a limited kind of scope that has the appearance of greatness and depth.

Anthony Scriffignano: Absolutely.

Michael Krigsman: And that's dangerous.

Anthony Scriffignano: And it is very dangerous. There's something called artificial general intelligence. We're not there yet; I'm not sure I want to live in that world, but we're not there yet. What we have right now, and I'm aware that Twitter is out there, I don't want to get tweeted all over, but there's an element of autocorrect on steroids.

We are summarizing the data. We are regressing the data. We are generating content that looks like that data. But the world in front of us doesn't look like the world behind us, so we have to be careful. If what you want is a summary of what happened last week, we have the technology; it's great. If you want to ask what you should say about what happened last week in the context of what you think is going to happen tomorrow, maybe you ought to rethink that, because that requires a crystal ball. It's not a crystal ball; it's very good math, incredibly complex, brilliant algorithms, generating something that reads better than maybe what you might have done, certainly better than what you could have done in the time you had. But there are a lot of caveats here that need to be considered.

Michael Krigsman: So there are a lot of caveats, but nonetheless the results are absolutely extraordinary. And that's the thing, right? Despite the caveats, despite the fact that it is an advanced autocorrect, statistical-modeling prediction system, it gives the appearance of super usefulness, and it is really useful.

Anthony Scriffignano: And it can be. I should probably have said autocomplete, not autocorrect, but yes, it can. And that's the moment, right? I get into this in every conversation; there will be somebody that says, look, this is amazing new technology, we need to sometimes just try things and break things and move faster, monetize this and expand that, let's make new mistakes.

Let's be careful that we understand the potential implications. You don't want to just rush out and use something because everyone else is using it or because it's new. And in certain cases, Michael, you're absolutely right: what you want is to generate content really fast that looks really good and is pretty close to what you would have done if you had more time.

But consider this: I found articles that were published under my name, as if I wrote them, in different parts of the world, in languages that I don't speak. And when I read these articles, I recognized most of it. What they've done is combine a lot of things I have said out there into something, without ever asking me about it.

There's going to be a lot more of that. That's the deepfake side of things. What happens when someone slightly modifies that content? Let's say someone wants to make me look like I'm biased, or like I'm not so smart, or like I'm against a certain philosophy or a certain part of the world. All they have to do is introduce one or two words.

And it's easy to do that. I was speaking to a group with you earlier this week, and I used an example I think is a good one. This is not a political example. Right now, if I use the word "Trump," it is probably a proper noun. If I used that word 20 years ago, it was probably a verb.

If you look at the other words that surround the use of that word, they have changed a lot over the last ten years. It's fine to talk about something like asparagus, because it's been around for a long time. But all of a sudden we want to talk about, say, tensions in and near Suez in the context of supply chain disruption and its impact on manufacturing.

How much language is out there like that? There's a lot of language with those nouns in it, but not at the same time, in that context. So just be careful: your conclusions will look right. They may not be. They may just be very sticky.

Michael Krigsman: On this same panel this week that you and I were on, I used this example. We have a project going on right now with some folks at Harvard Business School; as a matter of fact, Anthony, you're part of that project, so you're very well aware of this. We are analyzing historical CXOTalk transcripts to figure out patterns and changes over time, because we've done a lot of shows.

Michael Krigsman: We have amazing guests, and we have a lot of transcripts. So we put a bunch of CXOTalk transcripts into a large language model to see what it would come back and tell us, and we ran some queries to find out what some of these people said and how the pieces relate to one another. It returned extraordinary results.

Michael Krigsman: Quotes, I mean just perfect gems, that suited what we needed exactly. And then we went back and did a kind of audit-trail examination to find the source of those quotes in our transcripts. You know what we discovered? It made them up. The AI, it was all fake. It invented stuff. It was utterly useless.

Anthony Scriffignano: We didn't just pour it into a large language model. There was a lot of curation that went on in terms of using external corpora to contextualize the words that were being used. We had a lot of problem formulation around how we might engineer the NLP that we were using in that particular exercise. There was something called a heuristic evaluation, where we used you as an expert to evaluate some of the conclusions that were being reached.

We challenged the data that was coming back in the context of how it might change with different chunks of those transcripts. So there was a lot of adult data science going on in that analysis. And I know you didn't characterize it this way, but to be crystal clear: it wasn't open tool, pour in transcripts, push button, get answer.

There was a lot of data science that went on in the background there as well. And that's how you have to do it, if you want to do something like that.

AI governance and generative AI

Michael Krigsman: We have some technical questions coming up on Twitter, so why don't we jump from LinkedIn to Twitter. Chris Peterson asks: in terms of ethical AI, can we achieve that with neural-net-based approaches, or do we need more old-school symbolic AI technology in order to achieve it?

Anthony Scriffignano: We need both, and we need other things that have yet to enter the arena. I would direct you to the OECD and other organizations that have done a tremendous amount of work around responsible AI, auditability, and understanding the algorithms that are used. The bottom line here is that the techniques that exist today are necessary but not sufficient.

I think that's embedded in your question. It's a little bit of a trick question: should we use a hammer or a wrench to fix this alien spacecraft? Right now we have hammers and wrenches, and we have an alien spacecraft. It's going to need hammers and wrenches, but we're going to need other things that we don't have yet.

In the meantime, we're going to need people, people that can be the gold standard in these heuristic evaluations. They need to be similarly incented and similarly instructed. So you get a bunch of experts, and it sounds like you might be one of them, who have the same thing to gain or lose by being right or wrong.

That way they're not introducing that kind of bias. You ask them all the same question, and you compare their answers. Definitely the technologies that you're referencing are part of that, because you need to do this at scale, and we're talking about a lot of zeros when you count the number of data points we're talking about here.

People can't look at all that data, and the concept of looking at a statistically representative sample is out the window, because of the way this data coalesces; it's massively multimodal. If you're trying to do something like buyer intent or churn, maybe. Trying to do something like fraud? Absolutely not. The best fraudsters, when they think they're being watched, change their behavior.

So if you model it, you model how the best ones no longer behave. You need people, experts, to look at that and tell you what the technology can't. There's no technology right now, either in the old-school methods or in the new-school methods, that does it all. We need to use those, and we need some of the things we haven't invented yet.

In the meantime, we need these experts, similarly incented and similarly instructed, to be our heuristic brain.

AI sentiment analysis and data privacy, security, and financial fraud

Michael Krigsman: We have a question from Twitter from Hue Hoang. She says: Dr. Scriffignano, what can we consider when thinking about sentiment analysis as AI evolves, and the risk of increasing malfeasance?

Anthony Scriffignano: The question is about sentiment analysis. There is a lot of science around this. You may have heard of analyses that say our customers are happier, or our customers are getting less happy. Generally, when you peel it back, a lot of those algorithms look for certain words that indicate happiness or displeasure. The problem is that we have confounding characteristics of language, like neologism, the making of new words, or sarcasm.

You know, Michael, I can't see your shoes, but if I say, "Nice shoes, Michael," you don't know whether I'm being sarcastic or not. If you know that I typically love your shoes, then you're more likely to believe that's a positive comment. If I say so-and-so is an excellent data scientist, if you don't mind failing a lot, that sounds positive because I said "excellent."

But the dependent clause there kind of took it away. That's the shorthand, and people do it a lot when they talk online, especially right now. Then we introduce malfeasance, which is the second part of the question. Malefactors are funny because, first of all, there's that element of the observer effect: when they think they're being watched, they change their behavior. And they have certain behaviors that we know they have, that they don't necessarily know

we know they have. For example, one of them is narcissism: they tend to look at themselves a lot to see if they're getting noticed. The fact that they're looking at themselves more frequently is a way of getting clued in that maybe they're not like everybody else. There are a lot of those, thousands of those.

That was a pretty easy one. But consider two types of malfeasance, one active and one passive, that I'll point out right now: misinformation and disinformation. Disinformation is, if I want to make you look bad, I defame you. I say things that I know are not true, usually hard to prove, and I put them out there because they're salacious.

People tend to repeat them: did you hear? When those people who repeat them are repeating them, that is misinformation, not disinformation. They're not trying to malign you; they think it's true. They're just passing along something that's not true. Two types of malfeasance, one passive, where the original malefactor kind of weaponizes everyone else by saying something they're likely to repeat. Now imagine algorithms that try to tell the difference.

How could you possibly do that? I don't want to get into tradecraft, but you can possibly do that. There are ways to look at how often the first person versus the third person is used, for example, or approaches that look at whether the language is consistent with the other language this person uses, to see whether it's likely they're using someone else's language.

There are lots of advanced techniques in NLP, natural language processing, that can address these concepts of misinformation, disinformation, and sentiment disambiguation. Are we there yet? Is the state of the art sufficient? Absolutely not. Not even close. There's plenty of work to do. If you want to get a Ph.D., if you want to become the guru of something, this is a field to get into, because it's definitely not done yet.
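Two of the techniques mentioned here can be sketched very simply (the word lists and cues below are hypothetical illustrations, not actual tradecraft): a naive lexicon-based sentiment score, which sarcasm defeats exactly as described, and a crude first-person-versus-third-person pronoun ratio of the kind alluded to for spotting borrowed language.

```python
# Hypothetical sketches of two techniques mentioned above.
import re

POSITIVE = {"nice", "excellent", "love", "great"}
NEGATIVE = {"failing", "bad", "terrible", "hate"}

def lexicon_sentiment(text: str) -> int:
    """Naive word-counting sentiment: positive minus negative hits."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Sarcasm defeats word counting: "excellent" cancels against "failing".
print(lexicon_sentiment(
    "An excellent data scientist, if you don't mind failing a lot."))  # 0
print(lexicon_sentiment("Nice shoes, Michael."))  # +1, sarcastic or not

FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}
THIRD_PERSON = {"he", "she", "they", "him", "her", "them", "their"}

def first_vs_third_ratio(text: str) -> float:
    """Crude stylometric cue: share of pronouns that are first person."""
    words = re.findall(r"[a-z']+", text.lower())
    first = sum(w in FIRST_PERSON for w in words)
    third = sum(w in THIRD_PERSON for w in words)
    return first / max(first + third, 1)

print(first_vs_third_ratio("I looked at my results and I liked them."))
```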

Generative AI and the AI “super-saturation” problem

Michael Krigsman: We have another thought-provoking question from Simone Jo Moore, who asks: is AI such as ChatGPT going to have diminishing returns? As we consume AI content, do we stop feeding it new content, so that the AI feeds itself? AI keeps generating content, we've run out of content for it to feed on, and now the AI is responding to existing AI-generated content.

Michael Krigsman: What about that situation?

Anthony Scriffignano: The concept underlying your question is something called supersaturation. These algorithms don't work well when they supersaturate, when there's nothing net new. So yes, if all we did was use these algorithms to generate content, and if most of the content they consumed was produced by these algorithms, then because they are largely convolutional, they would converge on their own content. They would basically consume themselves and become a kind of parody of themselves.
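A toy simulation of that supersaturation effect (an assumption-laden sketch, not from the episode): a unigram "language model" retrained only on its own sampled output. Words that fail to appear in a generation drop to zero probability and can never return, so the model converges on its own most common content.

```python
# Toy illustration of supersaturation: a unigram model retrained only on
# its own output. Once a word's probability hits zero it can never come
# back, so the vocabulary can only shrink across generations.
import numpy as np

rng = np.random.default_rng(0)
vocab = 50
probs = 1.0 / np.arange(1, vocab + 1)        # Zipf-like word frequencies
probs /= probs.sum()

for generation in range(8):
    corpus = rng.choice(vocab, size=200, p=probs)   # model writes its corpus
    counts = np.bincount(corpus, minlength=vocab)
    probs = counts / counts.sum()                   # retrain on own output
    print(f"generation {generation}: distinct words left =",
          (probs > 0).sum())
```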

That won't happen, for a couple of reasons. One is that people still say things, and when people say things, they often try to be not like the others. You may have picked up, from the way I'm talking, that I'm trying not to use a lot of technical terms or buzzwords. I do that on purpose, to make what I'm saying more approachable.

These algorithms won't do well with the words I've said, because I'm mostly not saying words the way everybody else says them, and it's difficult to work with this kind of speech. People will probably learn to speak like that more, to be more careful not to fall foul of being faked. If you listen to the way CEOs do investor meetings, they kind of read in a monotone.

A lot of times they use language that's very safe and very couched. It's not that they're being evasive; it's that they're being careful not to get tripped up by algorithms that are going to start shorting their stock because they used a negative word. They've learned that. We will all learn that to some extent, and then the algorithms will learn that we're doing that, and they'll learn to do the same.

There's a little bit of, you know, this is like the Road Runner and the Coyote; we're going to keep going at each other here. But if you just took the hands off the wheel, then yes, what you say, what the math would say, would happen: supersaturation, eventually. It won't, because human beings are involved.

But it's definitely something to think about. A very interesting question.

Prioritizing use cases and investment in generative AI for the enterprise

Michael Krigsman: Let's shift gears a little bit. When we look at generative AI in the Enterprise, how do we prioritize the use cases and the investments? Where should we be focused at today's level of technology maturity?

Anthony Scriffignano: I would ask who "we" is as you ask that, because you're very much using a rhetorical "we," you know.

Michael Krigsman: It's the royal we, you know.

Anthony Scriffignano: Yeah. If that "we" is governments trying to monitor potential bad guys, or that "we" is corporations trying to maximize return to investors, or that "we" is researchers trying to do something innovative, those are different answers. The popular answer, the ChatGPT answer, sorry, the generative AI answer, would be, you know, maximize return on investment.

So look at the use cases that have the highest likelihood of seeing some kind of multiple on the investment, because the investment is not insignificant here. You think these things are open source, and great, that means they're free, right? Well, no. You start needing to use your own data, and needing to use other data, some of which you might have to pay millions and millions of dollars for. If you start looking at the infrastructure that is required to produce products that use this, the technical stack required for that is definitely not free.

Then there's the related part: the ability to test these things, to supervise them and monitor them, regulatory compliance. All of these things are super expensive. So looking at the investment is one thing. As an innovator, one of the things I look at is frames of innovation. There are two categories I look at: doing new things, things I've never done before, and doing old things in new ways.

If you're doing a new thing, you have to be able to prove that the new thing brings value to the organization. If you're doing an old thing in a new way, you're trying to cut costs, you're trying to increase operating efficiency: more, better, faster. Those three things are how I measure it. There's another rubric that I use a lot, which tends to resonate.

I love it. If you're doing something analytical: are you now enabling the answer to a question you couldn't answer before? Are you changing a decision you might have made? Or are you exposing a risk or an opportunity that's relevant? Those are three very valuable frames. You can measure that, and you can use it to prioritize the use cases.

This one over here only does three of those things; score them relative to each other like this. You can do this sort of, you know, it's called an index: make this weight smaller, make that one bigger. This is how they make coffee blends. This is how they make wine blends. This is how you make innovation blends, too, using exactly the same techniques.

So it's a lot like winemaking.
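A minimal sketch of that kind of index (the criteria, weights, and scores below are made up for illustration, not Scriffignano's actual rubric): score each use case against weighted criteria, rank them, and "blend" by adjusting the weights.

```python
# Hypothetical "innovation blend" index: score use cases against weighted
# criteria and rank them. Criteria, weights, and scores are invented.
criteria_weights = {
    "new_question_enabled": 0.35,   # answers a question we couldn't before
    "decision_changed": 0.25,       # changes a decision we'd have made
    "risk_or_opportunity": 0.20,    # exposes a relevant risk/opportunity
    "cost_reduction": 0.20,         # old thing done a new, cheaper way
}

use_cases = {
    "contract summarization": {"new_question_enabled": 2, "decision_changed": 3,
                               "risk_or_opportunity": 2, "cost_reduction": 5},
    "fraud signal triage":    {"new_question_enabled": 4, "decision_changed": 4,
                               "risk_or_opportunity": 5, "cost_reduction": 2},
}

def blend_score(scores: dict) -> float:
    """Weighted index on a 1-5 scale; adjust the weights to re-blend."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

for name, scores in sorted(use_cases.items(),
                           key=lambda kv: blend_score(kv[1]), reverse=True):
    print(f"{name}: {blend_score(scores):.2f}")
```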

Michael Krigsman: Folks, keep those questions coming. As I said before, when else will you have the chance to ask Anthony Scriffignano pretty much anything under the sun? So ask; take advantage of it.

Anthony Scriffignano: The bar is getting pretty high here. These questions are really great.

Michael Krigsman: Yeah, no, I know as we're talking and the questions are coming in, I'm like really trying to raise the bar.

Anthony Scriffignano: And I see what you did there. Thank you.

AI regulation and enforcement

Michael Krigsman: Okay, so keep the questions coming. Here's a great question from Arsalan Khan on Twitter, going back to the malfeasance issue that was raised earlier. He says: okay, Anthony, when we make all of these regulations and laws that you've been alluding to, how do we make sure that companies will not circumvent them, and who enforces the guardrails? It's a really, really important question.

Anthony Scriffignano: There was a great McKinsey study that came out recently, and I wish I could credit the author; I apologize, I can't remember his name. But he said, we've just opened Jurassic Park, but we haven't installed the electric fences yet. A brilliant quote. That's what's going on right now. So let's assume we make the right laws. And by the way, I don't assume we will.

I'll come back to why I don't think so in a second, but let me accept the premise of your question, that we make the right laws. How do we know that companies are following them, and how do we know they're making the right decisions? That's the responsibility question, and there is no one answer to it. These laws tend to have very large penalties:

a percentage of global revenue, that kind of thing. You can be driven out of markets. You can certainly face criminal charges in certain cases with some of these laws. And your shareholders and your customers will vote with their feet, with their willingness to invest in your company or to buy your products and services, if you run afoul of those things. Even within the organization, because this technology is so democratized, pretty much anybody can have access to it now, on your phone, right?

How do you know what everyone is doing? You, as a compliance person within an organization, can't watch it all; everybody has to be part of that compliance organization, not one single person. And even with that kind of culture, take someone whose goals are: make more money, increase sales, get more mentions on social media, raise the Net Promoter Score. If I'm judged by KPIs that are absent any thought of how I might be achieving them responsibly, then there is certainly a temptation to use all of this new technology irresponsibly.

So leaders in organizations have to take this seriously. Is it someone on the board? That's pretty good; your question belongs in the dialogue of the board. Is it someone in the legal department whose job it is to wake up every morning and understand the regulatory landscape and how it has changed? There are scores of countries right now with legislation in this area either under adjudication or enacted, and an executive order in the United States which, if you read it, is pretty intense and pretty long.

How are we going to respond to that? I don't think the world knows yet. There's certainly guidance from organizations like NIST in the United States and from equivalent organizations in other parts of the world; there are things you ought to be doing. Well, you know, I ought to be doing a lot of things. Fruits and vegetables.

Trying to get all this right within an organization definitely requires focus and attention and measurement, and consequence if you fail. You shouldn't just be running around trying to see how many times you can say "AI" when you open your mouth; you should be thinking like you're thinking. I don't see a lot of that yet, and I think we need to see more of it, or there's going to be a reckoning here.

In many ways.

AI impact on society and business

Michael Krigsman: You mentioned culture and this disconnect between trying to drive responsible, ethical use of AI technologies in the face of corporate mandates to do whatever is necessary in order to be profitable and achieve a high ROI. And we all know those goals can be, and frequently are, in conflict. So my question is: is there something unique, different, or distinct about gen AI efforts compared with this age-old question of honesty in the face of desire and temptation, which is fundamentally what it is?

Anthony Scriffignano: Yes, there is. You can fail much more epically, much more quickly, much more globally, and possibly without even realizing it, now, with this technology. Other than that, it's pretty much the same.

Michael Krigsman: Elaborate on that. You say you're concerned?

Anthony Scriffignano: Just so we're clear...

Michael Krigsman: You know, I wasn't really sure about that.

Anthony Scriffignano: Okay. It's totally different, because now you have technology that operates as your agent, and this concept of agency is very dangerous. If you hire somebody to deliver explosives for you and they trip and blow something up, you have some involvement in that, because they were acting as your agent. Well, now, if you have an agent that's screening resumes for you that has some kind of bias you didn't think about, or an agent that's generating content and you're using that to make decisions,

where does that leave you? We don't have words to describe this digital agency. We say "responsible AI" because we don't have another term for it yet, but understanding what your digital children might be doing when you're not paying attention is something we don't really have, and probably never will have, the full ability to do. The cat is out of the bag here.

No pun intended, the cat is out of the bag. This has happened, and now we have to figure out, as a human race, how we want to respond to it. I was looking at a customer service blog last night. There was an issue, and, you know, you can't call a person anymore; you wind up in the chat thing, and that just directs you to the frequently asked questions.

You always feel like, well, my question is never one of the frequently asked ones, that kind of frustration. At first it looked like this organization had responded very thoughtfully and very quickly to a customer about an issue that was similar, not the same. Then, reading on, they had responded in exactly the same way to a different customer that had a slightly different issue.

It became pretty clear that they were responding very quickly because their bot was responding very quickly, and it did the exact opposite for me. It made me feel like, you didn't care enough to read this. Now, you don't know what goes on behind the scenes; maybe they do care. But we're not going to get there, and we're going to get further behind.

It's going to get worse before it gets better because of the degree of acceleration; this is accelerating much faster than our ability to chase it. That's okay, that happens with a lot of technologies. I'm not saying the world is going to end because of this, but we have to be careful about which decisions we give up, what agency we cede to technology, and what the implications of that are.

If it's responding to your customers, then don't be surprised when your customers feel like you're disconnected, because you are disconnected. If you're using it to make a fast decision because you don't have time to make any decision, maybe that's a good thing. Maybe not if it's a flight system on an airplane, right? So we have to think about consequence.

We have to think about downstream: how will this change us? We also have to think about the next disruption: when it happens, will we be better or worse prepared for it? Because we have muscles that allow us to do some of these things, and when we let technology do them for us, we get weak, and we lose the ability to respond.

I could give you examples, but they involve specific companies, so I'd rather not, where companies have lost billions of dollars in market cap because of their inability to respond to something while chatbots tried to do it for them. That's dangerous.

Michael Krigsman: I'd like to come back to a very important comment, again from Simone Jo Moore on LinkedIn, who makes a point about AI ethics and responsible AI. She says, and this is a quote, "There must always be the human right to question an AI decision." Now, in theory, that sure sounds good. But in practice, without a very expensive infrastructure in place to receive comments back, and if we're talking at scale, potentially an enormous user base, while it sounds nice in theory, it absolutely does not exist today, and I don't see how in practice it could exist.

Anthony Scriffignano: In certain industries, you can require that. For credit decisions, for job applications, or decisions about whether or not you got into a university, you can certainly require, and generally there is a requirement, that the subject of that decision has a right to question the data that was used, not so much the method, although we're getting there, to make the decision that affected them personally.

I have a car that has a lot of automation in it; call a lot of that AI if you want. It makes decisions for me all the time. It makes decisions about whether to alert me about certain things, about how the anti-lock brakes work or whether to apply them, about whether or not to change the route

that I'm using on the GPS. I don't have the right or the ability to question all of those; I wouldn't understand the answer to some of them. So we're going to be subject to more and more automation making decisions on our behalf to serve us. When you go back to the original definitions of AI, they were very anthropomorphic.

They used very human terms: to make decisions on behalf of humans, to project human intent, to behave responsibly; they used the word responsibility. It all sounds good, but that algorithm is sitting inside a chip doing billions of computations at the edge, in your braking system, in your car. At some point, what will a self-driving car do? I know someone who had an experience.

I don't know firsthand that this happened; they told me it happened, and I believe them. So there you go. They were in one of these autonomous cars, you know, you summon the car and the car comes, which still terrifies me, but I'll get there. While they were in this car, a minor accident occurred in front of them.

The car stopped. This actually occurred at a traffic light. The light turned green, and the car drove over all the pieces of wreckage that were in the road and continued on its way. You would never do that as a person. You would never do that. What if part of that wreckage was organic? It's a crime scene, at the very least, or could be.

Humans would never do that, but the algorithm said it was okay. A while ago, there were kids that used to make fake stop signs and taunt the self-driving cars, you know, see if they could make them stop. They would jump out and hold up the stop sign. It's really funny until somebody goes to the hospital, right?

We're beyond that now; the car has the ability to ask, was there a stop sign there yesterday? In other words, is the stop sign in front of me real? Questions like that. But this is technology and counter-technology, and it's going to continue to happen. And "should" and "must" are two very different words.

Should we be able to question any decision made by AI on our behalf? Yeah, we should. Will we be able to? I don't think it's realistic that we'll ever get there, in the world I live in, because of the real proliferation of this sort of thing. So we have to do it use case by use case, industry by industry: pick the most egregious places where we need to require it, write regulation that requires it, and understand that there are countries, or places in the world, that will not have those regulations.

And by leaving something unregulated in one place and regulating it in another, you create this disparity. There's lots of complexity that has to be considered. So you're right; logically, you're right. Realistically, I don't know how that's going to happen.

Metrics, measures, and KPIs for generative AI in the enterprise

Michael Krigsman: Let's talk about metrics and measures. How do we determine or how do organizations determine the right way of evaluating these gen AI efforts? And to some degree, the issues you were just raising come into play there as well, because the evaluation criteria can address all of these points.

Anthony Scriffignano: You said, how do organizations do it? I think there's probably a difference between that and how they should do it. It depends on the size of the organization. Larger organizations have these very sophisticated models where you have to reach a certain investment hurdle rate: you articulate what you intend to do and the cost, and someone calculates the impact on EBITDA or whatever metric

they want to calculate. There are decisions involved with generally accepted accounting practices, whether this is capitalized as investment or whether it's expense, those kinds of things. So there are tried-and-true ways of measuring the cost of an investment in some new technology. But this is innovation, and that is something very different.

It's in a different category. If you only evaluate new technologies that way, everything will either look very cheap, because you haven't considered the total cost of ownership, like all those people needed to keep it working, the cloud environment, trying to get your data back out later, and all that; it just doesn't get into the use cases

you didn't understand. Or everything looks too expensive, because the first nuclear reactor was really hard to build; the second one was a little bit easier. It gets easier to do these things as time goes on. Total cost of ownership, fully realized value of the investment over a certain horizon: that's expanding, sort of, the traditional way of measuring.

If you pick a use case, there are KPIs that come into play. One of my favorites is in sales and marketing: customer management actions. Which of my customers do I focus on, and how do I measure the value of placing that focus on them? If you only serve your best customers all the time, maybe they're not going to grow anymore, and they're going to do business with you anyway, because they love you, or it works for them, or they can't do business without you.

No matter how you serve them, you're never going to make any more money from them. On the other hand, if you don't serve them, maybe they'll go away. So there are ways of measuring that. Lead management actions are the opposite of that: I have too many leads and too few salespeople, so which leads do I focus on? You can go after the type of customers that look like your current best customers, which is a growth strategy.

Or you look at prospects that look slightly different than your existing customers; that's an expansion strategy. Do you look at customers that are successful now, or customers that are likely to be successful later, given the time dimension to it? So there are definitely a lot of KPIs out there. You can go into the responsibility frame: you can look at ESG, you can look at the types of markets you can serve and how that will expand or change your footprint of impact on society or on the planet.

You can look at things like engagement, employee engagement. Smart people, data scientists, want to work on cool stuff. If you don't let them work on this, are they going to run away and follow some other shiny object? Then you don't have the people you need to support the things you were doing yesterday.

So there is some element of keeping your own internal people happy, because they think they're working on cool stuff, and they are, and they're happy and engaged. You want to use this to get ahead of your competitors. All of those are components you can use, and what I always advise is creating a method of measuring, a rubric, that lets you measure multiple KPIs.

They should be both qualitative and quantitative, scaled according to criteria that everyone understands. And if those criteria change, you go back and reevaluate everything. If you only use ROI, you're going to miss the innovation. If you only use innovation, you're going to forget how much it costs.

Anthony Scriffignano: There isn't one ring to rule them all here, one way of doing this. I'm trying not to say scorecard, because scorecards are static; they don't change. When you have a rubric, a way of measuring, it allows you to scale the relative importance of each of the components, both qualitatively and quantitatively, and you revisit that over time.

Anthony Scriffignano: You can definitely measure these sorts of things and you can do an amazing job.
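One way such a rubric might look in practice (a hypothetical sketch; the KPIs, scales, and weights are invented, not the actual method): normalize quantitative and qualitative measures onto a shared scale, weight them, and keep the weights easy to revisit, unlike a static scorecard.

```python
# Hypothetical rubric: blend quantitative KPIs and qualitative ratings on
# one shared 0-1 scale, with weights that are meant to be revisited.
QUAL_SCALE = {"poor": 0.0, "fair": 0.33, "good": 0.67, "excellent": 1.0}

def normalize(value: float, lo: float, hi: float) -> float:
    """Clamp a quantitative KPI onto 0-1 so it can blend with ratings."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def rubric_score(kpis: dict, weights: dict) -> float:
    return sum(weights[k] * v for k, v in kpis.items())

effort = {
    "roi_multiple": normalize(2.4, lo=1.0, hi=5.0),       # quantitative
    "employee_engagement": QUAL_SCALE["good"],            # qualitative
    "responsible_use_readiness": QUAL_SCALE["fair"],      # qualitative
}

weights = {"roi_multiple": 0.5, "employee_engagement": 0.2,
           "responsible_use_readiness": 0.3}
print(f"score: {rubric_score(effort, weights):.2f}")

# Criteria changed? Reweight and reevaluate everything, as described above.
weights["responsible_use_readiness"], weights["roi_multiple"] = 0.5, 0.3
print(f"rescored: {rubric_score(effort, weights):.2f}")
```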

Responsible AI, governance, and corporate profitability for business leaders

Michael Krigsman: We have another really interesting question from Lisbeth Shaw from Twitter, and she says, How do you shape management's expectations and understandings of the pragmatic use of AI, especially with respect to the safe and responsible use of AI, relative to profit and margin?

Anthony Scriffignano: If the question had said "do you" instead of "how do you," I think I would have been better able to answer it, because there's an element of rhetoric there. Some senior leadership is very focused on this; they are thinking very broadly, and they have someone on the board or someone in the C-suite that is that voice of responsible use of technology, whatever the technology is. Some organizations are, you know, we need to grow faster, we need to lead harder.

Anthony Scriffignano: We need to we need to make we both break things, but make make mistakes faster, much faster, faster. To grow, grow, grow, grow, grow. With this kind of technology that can accelerate the speed with which you hit a wall. Right. So, you know, how do you there's a very fine line between helpful and an annoying. And I think you have to stay as close to that line as possible and a very fine line between people being able to hear you and people thinking they know what you're going to say before you say it.

Anthony Scriffignano: So I try to be very Socratic about this in those conversations. I try to say, what do we think will happen if we don't have someone focused on the changing regulation? Not as if I already know; I don't say it that way. Or, what problem are we trying to solve? Because when we're really rushing, it's, let's use gen AI. And what are we going to do with it? What's the problem? What's the analytic? What's the new thing that we're going to do? There's a question of the opportunity cost of doing that versus doing something else, which we often forget. If we're going to do this, and we're not going to change the amount of resources, we're going to not do something else.

Anthony Scriffignano: So I ask the question: what is it that we're not going to do in order to be able to do this? Because we can't just tell those same ten people, or hundred people, whatever it is, to double down and work harder, and learn this new stuff like new people, and do all that other stuff you were doing yesterday faster and better. At some point that breaks.

Anthony Scriffignano: So there's an element of being Socratic here, there's an element of being rooted here, and there's an element of slowing down while you speed up, which is a very, very difficult thing to do.

Michael Krigsman: The technology is different, but consider the underlying human dynamics of maximizing profit, especially in public companies, where profit needs to be reported on a quarterly basis, but even in private companies, venture-funded companies, where if you don't show a certain level of growth, you as CEO are likely to be out the door, or your company is going to go out of business.

Michael Krigsman: So yes, the technology has changed, but is there anything different, unique, or special about it that changes the equation or the dynamics? Because fundamental human nature is the same.

Anthony Scriffignano: This is an externality that we cannot control. It is happening; we're not going to put the genie back in the bottle here. While you were asking the question, I was smiling. I don't know if your viewers can see both of us at the same time, but I was grinning while you were asking, because greed is always going to be there, and there's always going to be that person, usually a bully, that says, yeah, whatever, nerd boy, go away.

Anthony Scriffignano: We're going to we're going to do this because we're going to make more money and we're going to make money fast before everybody else knows that the people that produced radium water in the 1950s and, you know, for people to drink it because it was going to increase their vitality and lots and lots of other examples where this new thing that we didn't know anything about, let's hurry up and monetize it.

Anthony Scriffignano: And some of them are right, You know, sometimes they're right. Very often they wind up having been basically wrong. And so you get a little bit tired of selling both ways. Yeah, that's too fast. You need to go after those greenfield opportunities before the competitor. You need to claim the market. You need to make the market. Yes, you need to do all that right.

Anthony Scriffignano: At the same time, if you just say damn the torpedoes, I don't care what happens. Well, everything up. Do whatever you can to get there. Then you have to be prepared to deal with the consequence. In this case, that consequence is coming from technology that you probably don't understand and certainly can't control. And we have, as a human race, not experienced that yet.

Anthony Scriffignano: And that's where some of the doomsayers and I shouldn't call them that, but some of the people that have a more dystopian view of where this is all going are saying because of our engineering tendencies to be greedy and to try to adopt these things as fast as possible, we're going to put them in charge of who controls the boat.

Anthony Scriffignano: You know, I don't want to start getting into, you know, this thing or that thing that can kill you and it's going to be skying it. And and, you know, there is an element of that. If we if we allow many of our decisions to be if we rush ahead to too many of these decisions be automated and the consequences, then damn the consequences and shame on us.

Applications of AI in the military

Michael Krigsman: Would you summarize for us, and also offer your opinion on, the use of AI in warfare, military operations, and the Department of Defense?

Anthony Scriffignano: Some very smart people who have forgotten more about this than I will ever know are focused on that, so how dare I? However, since you asked: all around the world, there are nation-states that are certainly well aware of this technology and are rushing to make use of it, in order to protect against the certainty that the other guy is also making use of it.

Anthony Scriffignano: They are in, in many cases, not constrained by the same regulatory environment. They don't have the shareholder responsibilities that we talk about. There is definitely an element of responsible A.I. there. There is absolutely some of most questions, ability and knowing why you made the decision that and certainly there and in large, large part all deal with enormously complex infrastructures and already have massive amounts of A.I. embedded.

Anthony Scriffignano: So there are disciplines for ingesting new technology. There are processes that governments have set up for proposing new ideas to the government. There's a tremendous amount of oversight that goes on, an increasing amount of oversight that goes on in some of the most noble acts. Most people have the honor of meeting or involved in those efforts. So, you know, in the part of the world where I live and where you live, I think those are amazing people.

Anthony Scriffignano: They have a lot of work to do, and then it becomes bigger for them and it's moving faster for them than just you and me. But no, they haven't found that yet either. And they would say the same thing. Nobody's solved this yet because it is not a static thing. It is changing so me don't pay attention to what other parts of the world are doing and understand that in order to certainly value the same things that we value, I'll just pick on privacy because it's an easy one.

Anthony Scriffignano: You know, if one country values privacy and another country values national defense above the other, what do you think's going to happen when they start to implement technology that affects people? So, you know, that's a very real question you're asking. It's not it. People will write books on that. Those books will only have proper context decades from now.

Anthony Scriffignano: And between now and then, frankly, competing to long and disrupt everything and be some other technology that we're not thinking about that will come along and disrupt. That is not a new thing. It has happening all along. But the pace and the precedents of it is, you know, I was listening to an interview the other day and someone said, are we being reckless or feckless?

Anthony Scriffignano: You know, I mean. You can you can be on either side of that. And I think we have to be very, very careful to be asking those questions and having the right introspect on to answer those questions.

Michael Krigsman: So the bottom line is that these are very important questions, there are very smart people working on them, and the use of technology for military purposes is obviously not a new issue by any stretch of the imagination.

Anthony Scriffignano: But the pace of disruption is unprecedented. They cannot use the same skills and techniques they've used in the past; they can't just turn the crank faster. They absolutely have to disrupt their thinking, just like the rest of the planet, because of the pace of disruption.

Michael Krigsman: To finish, we have a very interesting question, again from LinkedIn and again from Simone Jo Moore, who asks: Are you listening very carefully now, Anthony?

Anthony Scriffignano: As soon as you said Simone, I started listening very carefully. Yes.

Michael Krigsman: Okay, so this is going to need doubly careful listening. If you could be any robot in history, which one would you be?

Anthony Scriffignano: Robby the Robot. He was the rock star of robots. He was in Lost in Space; he was in all kinds of movies. He's in a museum now, and there are replicas; I have Robby the Robot cufflinks. Nobody ever really knew exactly what he could do, because he had this cybernetic brain that they never really understood.

Anthony Scriffignano: Absolutely ravishing robot.

Michael Krigsman: And I have to say that, in interviewing probably a thousand, maybe more, of the top executives in the world on CXOTalk, we have never gotten that question before.

Final thoughts and advice on AI and leadership

Anthony Scriffignano: There you go; you just never know. That was certainly not a generic question. Michael, if I could: very often you ask me to close with some sort of advice, and if I were listening to this, I might feel, that's not really nice, all you did was make me feel overwhelmed. What am I supposed to do?

Anthony Scriffignano: I would say three things. I would say, number one, be humble. You can't do this by yourself. You're not as smart as you think you are. Nobody is. This is bigger than all of us. And the circle ring in other people that don't think like you and make sure that you keep doing that, challenging what your team is learning, not just the people you're hiring, but the people that work there, making sure they're getting smarter and not just running around talking about how smart they are.

Anthony Scriffignano: So humility is huge, making mistakes, Make sure that you're not just hurrying up to break things and just doing the same mistakes over and over again. So it's very important to learn from failure, to learn new things, from failure, not just how to fail in the same way faster than the last thing I would say is because the world is changing around you while you are experiencing this change.

Anthony Scriffignano: Change is very difficult to notice when we're part of it. So you have to be very careful, while you're building the next killer thing, to watch how the world is changing. Otherwise you risk, number one, being irrelevant; number two, becoming irrelevant more quickly; and number three, missing a huge opportunity because you're so busy trying to finish a thing that doesn't make any sense anymore. You've got to keep your head up.

Michael Krigsman: Anthony, thank you so much for helping us make sense of this very rapidly changing time of generative AI, both in society and in the enterprise. Anthony Scriffignano, thank you so much. I really appreciate your being here.

Anthony Scriffignano: Thank you very much, Michael. It's a great pleasure. I truly appreciate it.

Michael Krigsman: And thanks so much to the great audience. You are extraordinary; you ask the most amazing, insightful questions. Before you go, subscribe to our YouTube channel and subscribe to the CXOTalk newsletter. We have amazing shows coming up; check out the home page of CXOTalk.com to see them. This is our last show before the New Year holiday, so I hope everybody has a great holiday, a great end of year, and a great day today. Thanks so much, everybody. Take care.
