2017

Building AI: A Software Maker's Perspective

Sean Chou, CEO and Co-Founder, Catalytic
Ed Sim, Founding Partner, Boldstart Ventures
Keith Brisson, CEO and Co-Founder, Init.ai
Michael Krigsman, Founder, CXOTalk

Artificial Intelligence is surrounded by marketing hype, making it difficult to assess what's real and useful. In this episode, we talk with a venture capital investor and two software entrepreneurs to learn what's involved with creating products that rely on artificial intelligence and machine learning. Join us as we cut through the hype of AI.

Ed Sim is the Founding Partner of Boldstart Ventures. He was an early believer in SaaS and led first-round investments in market leaders like LivePerson (Nasdaq: LPSN) and GoToMeeting (acquired by Citrix). Over 19 years, he has led many seed and first rounds and helped a number of entrepreneurs successfully scale from seed to market leader.

Sean Chou is the CEO of Catalytic, which makes the Pushbot platform. Prior to Catalytic, Chou was the Chief Technology Officer and EVP of Services at Fieldglass. He was responsible for the overall development and delivery of the Fieldglass solution and services, overseeing the product development, hosting, professional services, and marketing departments, and he provided the strategy and vision for Fieldglass's award-winning cloud solution from its inception.

Keith Brisson is CEO and co-founder of Init.ai, a venture-backed developer platform that enables companies to create conversational apps. Keith is a software engineer and has extensive experience building conversational applications using modern machine learning techniques. He focuses on and writes frequently about the end-user experience of conversational applications, the machine learning and technology that powers them, and the future of human-computer interaction.

Building AI: A Software Maker's Perspective

Michael Krigsman: Welcome to Episode #222 of CxOTalk. I’m Michael Krigsman, and we have a really interesting show. We’re going to be talking about artificial intelligence, machine learning, and natural language processing. And, we’re going to be taking a development perspective, from the point of view of people who are actually creating products. And we have with us today two company CEOs, as well as a venture capitalist and investor. So, let’s dive in and Ed Sim, you are Guest Number One, or at least, I am introducing you first. How are you, thanks for being here, and tell us about yourself.

Ed Sim: I'm good! Thanks for having me! I really love your show, and I'm glad to be here with two of the founders in the Boldstart fund. So just real quickly, I'm a founder and partner of Boldstart Ventures. We like to be the first check into enterprise founders. With respect to AI and ML, we don't go out of our way looking for AI and ML companies; we look for enterprise businesses solving big problems, and if they happen to use AI or ML, like Keith and Sean are doing, then that's even more exciting for us. So, looking forward to chatting about this.

Michael Krigsman: Fantastic! And, you have been an enterprise investor for a long time.

Ed Sim: Yes, yes. Over twenty years, doing things on the back-end and the SaaS side, with companies like Greenplum, GoToMeeting, and LivePerson, along with Init.ai and Catalytic.

Michael Krigsman: Fantastic. Our second guest is Keith Brisson, who is a founder of the company Init.ai. Hey Keith, how are you?

Keith Brisson: Excellent, Michael! Thanks for having me!

Michael Krigsman: So Keith, tell us about Init.ai.

Keith Brisson: Sure. Init.ai provides technology that helps enterprises converse with consumers at scale. We provide language understanding technology that helps them automate customer conversations, assist sales and support agents in live chat, and analyze the conversations that take place between their users and their agents. These can be external-facing conversations with consumers, or internal ones as well, allowing employees to access information from systems of record, CRMs, clients, etc.

Michael Krigsman: So, essentially, would it be accurate to say you’re building a natural language processing/AI - hate that term, AI …

Keith Brisson: [Laughter]

Michael Krigsman: … toolkit that enables large companies to build their own products?

Keith Brisson: Exactly! We're providing the technology that lets them unlock the data and the information within those conversations so they can incorporate it into their workflows, whether that's communicating with their customers or making their internal processes more efficient. It's our belief that we should be the ones to take that advanced technology and bring it to them, rather than them needing to build it internally.

Michael Krigsman: Fantastic. Well, I’m looking forward to diving in. And, Sean Chou, you are the third member of our behind-the-scenes in AI panel, and you are the founder and CEO of Catalytic.

Sean Chou: Yup. Thank you for having me on. So, Catalytic: we create a product called "Pushbot," and with our product, customers can quickly and simply create process bots that leverage work orchestration, automation, and AI; which, I agree with you by the way, is a little bit irritating as a term, but it's popular now and it captures a whole broad range of technology, which I'll talk a little more about. With our product, our customers really focus on operational efficiency, reducing the number of dropped balls they might have, and tying together a lot of the people and the different systems within their company.

Michael Krigsman: So, you mentioned the term, "process-bot." What is a process-bot?

Sean Chou: Yeah. As entrepreneurs, we feel obligated to make up terms.

Michael Krigsman: [Laughter]

Sean Chou: [...]

Michael Krigsman: Now that's good, and large companies do the same thing. But seriously, when you say "process-bot," tell us what you mean by that?

Sean Chou: So, we say it mostly to separate us out from, let's say, chatbots, which are a really popular concept. We do have conversational interface aspects to our platform, so in that regard, it's got some chatbot in it. But that's not really the main purpose of what we do. The main purpose of what we do, and the main purpose of what Pushbot does as a bot, is to push and promote a process. So, we like to say "process bot" mostly as a way to constrain what we're doing.

Michael Krigsman: I think a good place to kick off this discussion is that all three of you are very actively involved in considering different types of AI and what they mean. And, Sean, maybe you can begin by helping us understand what we mean by the term "AI," and what it actually encompasses?

Sean Chou: Yeah. For sure. It certainly covers a lot of different things, and I think with all major new technologies, there's always this retrospective period where people look back a little bit and say, "Hey, this really looks like it should be under this umbrella," and you get a lot of repackaging of things that once maybe weren't part of AI but, because it's the hot, buzzy topic, now get rebranded under AI. But, I think generally, when we think about AI, we think about it in three different categories.

There's really "strong AI," which is trying to create machines that are able to think in a general sense, in the same way that you and I are able to think. With "strong" or "general" AI, there are only a handful of companies that really should be considering it. You need a ton of resources; it's the Googles, Microsofts, and Amazons of the world that are going to win in that type of space.

The second category is really "weak AI," or "narrow AI." That's not as difficult; it's still extremely hard, but now what you've said is that instead of a general thinking machine, we're going to focus on a specific domain or a specific field. And so, you see a lot of that in virtual assistants like Siri, or maybe Ingram, or Clara; these are folks who are saying, "We're going to create an AI," and it often has a personality, but it's only going to solve a very narrow set of problems.

And then the third category, which I believe both Init.ai and Catalytic fall into, is the users of the technology and research that has come out of all this primary research on AI. We are beneficiaries of the research that's gone into natural language processing, sentiment analysis, machine learning; all the things that power AI. We take them, we repackage them, and, at least in Catalytic's case, we make them available for the average business so that they're able to use them in their processes. We're using it in our product in a very, very applied setting. So, it's not machine learning to be able to act like humans; it's machine learning to figure out how to improve their business processes.

Ed Sim: Yeah, and Sean, I think that's a great point. How we look at the world is applied AI. I mean, AI is such a buzzy word these days. Everyone has AI in their business plan, the same way everyone had ".com" back in the day. And the reality of it is, what business problem are you solving? Applied AI is very exciting because you're not going to out-Google Google, or out-Facebook Facebook, at AI, but how do you best leverage what's out there and then apply it to enterprise data? That's data that they don't have and can't get, and that's what I love about what you guys are doing, and what other companies are doing as well: working on that private enterprise data and learning from it.

Michael Krigsman: What are some of the really interesting use cases for this type of applied AI that any of the three of you are seeing?

Keith Brisson: I can give a few examples. You know, I mentioned we're focused on making customer conversations more efficient. Right now, companies need to hire teams of agents to serve their consumers in support and sales areas. And, it's our belief that we can take technology that happens to use machine learning to make that process more efficient, by offloading some of the routine portions of it to the computer so that the agent doesn't have to go through those routine processes themselves.

So that, for us, is helping offload the actions that people would otherwise have to take. And, in a broader scope, machine learning is incredibly powerful at finding patterns in data. So, you see it in all areas of enterprise use where you're trying to detect signals in large volumes of data that matter to the business.

In finance, it has been used for fraud detection for ages, where you're trying to find something that really matters to the business within a massive set of data. And, that's just going to expand as you have more and more data coming in from different sources. You need to figure out how to extract real value from that unstructured data to make your business more efficient.
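
As an aside, the fraud-detection pattern Keith describes here, finding the signals that matter in a large volume of transactions, is often approached with unsupervised anomaly detection. The sketch below is a minimal, hypothetical illustration using scikit-learn's IsolationForest; the transaction features, data, and contamination rate are invented for this example and are not drawn from anything the guests describe.

# Minimal anomaly-detection sketch for fraud-like signals (illustrative only).
# Assumes scikit-learn and NumPy are installed; all features and data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fake transaction features: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.normal(50, 15, 1000),      # typical purchase amounts
    rng.integers(8, 22, 1000),     # mostly daytime hours
    rng.normal(0.2, 0.05, 1000),   # low merchant risk
])
suspicious = np.array([[4800, 3, 0.9], [2500, 4, 0.85]])  # large, late-night, risky

X = np.vstack([normal, suspicious])

# Fit an isolation forest; `contamination` is the assumed share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

labels = model.predict(X)  # -1 = flagged as anomalous, 1 = looks normal
print(f"Flagged {(labels == -1).sum()} of {len(X)} transactions for review")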

Sean Chou: Yeah, for sure. You know, it's interesting, and I don't think it's a coincidence that AI is the trend right after data science. Like, remember, this time a year or two ago, we'd all be saying "data science." But, I think there's a reason for that. Cheap storage and cheap processing power just created so much data that data science came in and said, "How do we make sense of this data?" But, we've actually reached a point where people have thrown up their hands and said, "We can't make sense of this data," and so we really need other things to make sense of it, and those other things end up being AI. That actually ends up being a great initial use case for AI.

The notion of AI being used for human augmentation, as opposed to trying to replace humans, is, I think, still the right way to think about AI right now. For both of us, it's entirely human augmentation: how do we take processes that involve people and make those people more effective?

Ed Sim: Yeah. That's a great, great point. I mean, I invested in Greenplum back in the day, solving the big data problem; now it's part of EMC and Pivotal. And, we coined the term "smart data" five or six years ago, meaning you've got all this data … I mean, there are three certainties in life: death, taxes, and the growth of data. [Laughter]

So, when you go back to applied AI and machine learning, that's kind of the next phase, with GPUs, great neural network models, and the cloud. I think that makes it more accessible to all of us. So, you can look at anything manual in a process, including security: we invested in a company called SecurityScorecard that allows you to look at the security posture of a company from the outside in. Using tons of data, they use classification, clustering, and regression analysis, and all that stuff, to come up with scores on security. So, it can be anything from internal data entry and business processes, but externally you can use it for security as well. So, it's a pretty broad field.
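
As another aside, Ed mentions classification, clustering, and regression being combined into security scores. The snippet below is a purely hypothetical sketch of the scoring half of that idea, and not SecurityScorecard's actual method: a logistic-regression classifier trained on invented features, with the predicted incident probability converted into a 0-100 score.

# Hypothetical sketch: map externally observable signals to a 0-100 security score.
# Features, data, and labels are invented; this is not any vendor's real model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per company: [exposed_services, days_since_last_patch, leaked_credentials]
X_train = np.array([
    [2, 10, 0], [1, 5, 0], [3, 20, 1],      # companies with no known incidents
    [25, 300, 40], [40, 500, 120],          # companies that suffered incidents
])
y_train = np.array([0, 0, 0, 1, 1])         # 1 = had a security incident

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def security_score(features):
    """Convert predicted incident probability into a 0-100 score (higher = safer)."""
    p_incident = clf.predict_proba([features])[0, 1]
    return round(100 * (1 - p_incident))

print(security_score([5, 30, 0]))     # low-risk profile, expect a high score
print(security_score([35, 400, 80]))  # high-risk profile, expect a low score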

Michael Krigsman: So, there are so many different areas and applications where patterns can emerge and technology can help discern those patterns. From a businessperson's standpoint, how can they go about figuring out, for their own business and their own processes, where it makes the most sense to apply these types of machine-intelligent technologies?

Sean Chou: Yeah. I think this is an area where, and you look really young, so I'm not going to make a call on your age, but certainly Ed and I, being old guys in technology, would say that if you're looking at this from a business user perspective, AI is just a shiny object, you know? If you're looking at how to assess AI companies, stay focused on the business problem and realize that AI is just one possible solution. I mean, when we set out to make Catalytic, we didn't say, "Hey, we're going to create an AI company." We said, "Hey, we think the problem is that these processes are horrible, awful, and in spite of the technological advances that we have, they're still horrible and awful, and we need to do something about that." And sometimes, AI is the right answer. And sometimes, it's not. So, we have AI for some things, but our solution certainly doesn't revolve around or hang its hat entirely on AI.

And I would say the businessperson should have that same mindset: what's the problem; what value is going to be created? AI is just, you know, a new, big, shiny tool that is hard to ignore.

The one caveat I'd add is that AI opens the door to a new category of problems that previously would have felt almost intractable. Just as trains and automobiles in the industrial age allowed for low-cost nationwide distribution, AI is going to open up a new category of problems that we're able to tackle; problems that previously were just, "We can't tackle those issues; they're unaddressable." So, I think you'll see a lot of those.

Michael Krigsman: But there's a point at which I am now confused. On the one hand, you mentioned that AI is just a shiny object. And by the way, in doing that, you have just taken the wind out of the sails of half the technology marketing departments in the country, and in the world, okay?

Ed Sim & Keith Brisson: [Laughter]

Sean Chou: I'm going to be getting a lot of hate on Twitter; a lot of hate tweets.

Michael Krigsman: But at the same time, you said that AI creates a whole new set of problems that can be solved or types of solutions to problems. And so, how do you reconcile the fact that on the one hand, it's just technology, and on the other hand, the implications are so profound?

Sean Chou: So, if you have just an AI solution out there, and it's not solving anything, it's just a shiny object that has no meaning. But if there's a new problem that you're now able to solve as a result of AI, then to me, what it does is broaden the scope of the types of things that technology can solve. In that sense, it is remarkable. But at the end of the day, it still has to be solving the business problem; and I think one of the big dangers, whenever you have [...] things, is that people just start saying, "We're an AI company," but they ignore the business problem that they need to tackle. So, I guess I'm trying to speak out of both sides of my mouth here, but I think there's truth to this; because on one hand, AI now allows for solving things that we just would not have thought worth solving. Going back to what I was saying about it being a response to big data: we had reached a point where we just didn't even know what to do with all of this data. AI actually opens the door to being able to really significantly, meaningfully address terabytes of data that are coming in on a real-time basis. But if you're not doing that for a purpose, it's pointless.

 

Michael Krigsman: So Keith, how do you, with your company and your customers, [...] go beyond AI-as-a-shiny-object into AI that's really useful in a practical sense?

Keith Brisson: Yes. We're 100% in agreement with what Sean said: ultimately, our goal as technology providers should not be providing technology; it should be to solve the business use case. There happen to be the letters "AI" in our name, Init.ai, but we don't generally talk about the technology. That's kind of our approach when we're talking to companies: we talk about the use cases where we can help them be more efficient, and the problems we can help them solve.

And, to Sean's point, I kind of cringe when I see other companies really advertising the technology, focusing on the fact that they're an AI company, because it tends to mean that they're not focused on solving your actual problems. So, what we always strive to message is how we can make your employees more efficient, how we can increase your revenue and decrease your cost; not what kind of neural network we're deploying in our system, because that's not really what matters at the end of the day. So, for us, yes, we're an AI-powered company, but we're not an AI company. We're solving business problems. Does that answer your question?

Michael Krigsman: Yeah. And Ed, certainly from what you were saying earlier, when you invest, you're also investing in companies that are solving business problems, although they may be using AI to produce those solutions in a better way.

Ed Sim: Absolutely. And AI is not a panacea in the sense that, you know, robots or AI today are not going to replace every human, right? So, the question is … I've been very interested in human augmentation using AI, so you do things better, faster, and more efficiently. I think it's a tough sales proposition when a company comes in and says, "I'm going to replace everyone." The question then is, if you automate things, do things 5x faster, connect faster, what else can those employees do? And that, to me, is more exciting, so …

If I were a business … We talk to a lot of CIOs, and I ask them to find two or three pressing problems: what's the low-hanging fruit inside the business that can be partially automated? Some of that can be data entry, right? Tons of people are doing that. Look at anything that's been put offshore; remember that big trend back in the day? A lot of that's going to be replaced with AI, you know? And then think about customer service, right? That's another area that's ripe for disruption as well. Accounting, AR and AP: gather that data, load it up, and try to automate.

So what I want at the end of the day is … I don't care about the AI. If I'm buying from a SaaS company or a software company, I'm going to look at the dashboard that the end user uses, and say, "Give me answers." Don't just give me beautiful data and visualization screens. I want to go to that screen and have it say, "Hey, these are the three companies you need to call today to see if you get paid." That's what I want. I want answers. That's what I want AI to do for me: not give me tons of dashboards, but give me an answer; make sense of that data, and give me something that I can really trust.

Keith Brisson: One thing to keep in mind, Michael, is that we're part of this: people are using the word "AI" as though the strong AI that Sean explained at the start exists today. The fact is, it doesn't. We don't have computers thinking like humans today, and we aren't going to in the near term, so we're not going to be able to replace a human one-for-one with a computer. So what we need to do is take these techniques and apply them to specific verticals, and to specific use cases within them, where we can see real value generated today.

So, AI as a term: I think many people think about it aspirationally, for where it's going, and it is going there. And companies like ours and Sean's are really going to help get us there, in conjunction with the big guys who are doing some of the pure research. But today, we can take pieces of that; take pieces of what is going to get us to that point in the future, and solve real business cases today. So, there's a little bit of conflation of terms here. It's a real buzzword, but there's value in the pieces and how they can be applied.

Michael Krigsman: We have an interesting question from Scott Weitzman on Twitter, who is asking, "What are the key elements that you can use to classify or differentiate AI on the spectrum from technology to business, in order to help businesses understand the nature of the kinds of problems that AI can solve?" In other words, how do you explain it to businesses in a way that they get?

Keith Brisson: You know, I think we've certainly faced this challenge when we're describing how we augment or replace humans in customer support, and we're very clear with companies that we're not creating something that's inimical to humans. What we're doing is making the business process more efficient, and really focusing on those particular segments where we can apply the technology. And I think it's a matter of paying attention to the messaging. So, if a company comes to you and they're telling you about the AI technology, and not telling you how they're going to help your business, that should raise a little red flag; raise an eyebrow, and make you look a little more deeply.

So, I'd say when you're trying to evaluate these solutions, really consider what business process they're trying to solve, and the processes that are appropriate are ones where there's a fair amount of unstructured data that needs understanding. It could be language data; it could be financial data; it could be anything else.

Ed Sim: Yeah. I find it falls into two camps in enterprises, and it just depends on the use case. One is, "I can help you be smarter," i.e. the predictive nature of AI; whether that's a marketing dashboard or just helping you make decisions faster, that's one aspect of a pitch, and then you can backfill that with whatever problem you're solving. And the other is, "I can help you save a lot of time and money so you can free up your resources." Those are two different kinds of approaches, predictive versus general cost savings, and that's kind of how AI maps to business problems.

Michael Krigsman: Or to put it into broader terms, you have the efficiency aspect and, say, the innovation aspect?

Ed Sim: Sure.

Keith Brisson: Yeah.

Michael Krigsman: Let's change gears here and talk about designing and developing your products: Is it different from traditional software development when you start incorporating AI-related technologies? What are the differences? How do you think about it differently, design products differently, hire different skills, and so forth?

Sean Chou: I think there is a difference, but it's largely in the newness of AI technologies. If I were to replace the word "AI" and just say "database," for example: when I think about a product, there's no question about where a database fits in the architecture, or the solutions and features it can enable. AI I can think of in the same way, as a component that I can leverage, but it's new. So, it's unlocking a set of features and a set of capabilities where the patterns just aren't as well established.

I think some are, and that's why, among a lot of AI companies, you see a common set of things. You'll see sentiment analysis, you'll see categorical processing a lot, you'll see machine learning a lot; these are patterns that have emerged, and so those are, I don't want to say a "no-brainer," but established patterns that product architects can readily leverage, whether from an engineering or a product management perspective. But there are a lot more use cases that we haven't quite figured out yet because of the newness of the technology.
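
To make the "established patterns" point concrete, here is a minimal, hypothetical sentiment-analysis sketch in the classic form many product teams reach for: bag-of-words features plus a linear classifier. The handful of labeled examples is invented purely for illustration; real systems train on far larger datasets.

# Classic sentiment-analysis pattern: TF-IDF features plus a linear classifier.
# The labeled examples are invented; a real deployment would use much more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I love this product, support was fantastic",
    "Great experience, very helpful team",
    "Terrible service, I want a refund",
    "This is the worst purchase I have ever made",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the support team was great"]))        # expected: ['positive']
print(model.predict(["terrible service, worst purchase"]))   # expected: ['negative']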

Michael Krigsman: How …

Keith Brisson: [...]

Michael Krigsman: Oh, oh please. Please, go ahead, Keith.

Keith Brisson: Yeah. I just want to mention that it definitely requires a little bit of different thinking, at least at the implementation level, because there's generally a requirement for data. If you're using a solution provider, they may be able to offer that data up-front to help you get started, so it may not be a consideration, but data plays a role in any kind of application of AI and machine learning today, and if you talk with a solutions provider, that is going to be a consideration: how do you use data to make this thing more efficient?

And the other thing to consider is that these AI and machine learning systems evolve over time based on feedback cycles, or at least they should. So, you get immediate value, but the value that you get from these kinds of solutions up-front tends to increase over time, as they see more cases and work through their feedback cycles. So, it's a little bit different in that it's not static; it's something that evolves over time.

Ed Sim: Michael, can I add one other piece? I think Keith makes a good point about the data itself. If you're selling into enterprises, for any type of solution that leverages their data, a big question, no matter how much folks talk about the cloud, is, "Can you drop it on-prem?" And, I don't know if either of you has been asked that question, but it seems to me that for every company we have that leverages some type of data and runs AI on top of it, the question is, "Can you drop it on-prem? I can't move the data, there are all these regulations, and I need the most secure setup you guys can [provide]."

Keith Brisson: Well, we get that all the time, especially since a lot of the innovation in AI has come from large companies that have a tendency to suck up data, you know, the Googles of the world, which most enterprises are not willing to trust with their valuable customer data. So, we get that all the time, but data is a requirement, right? Our approach, at least, is to provide our own data to help get things started; in our case, it's general language understanding data; and then let companies maintain control over the data that's specific to them. Our strategy is to offer on-premise or virtual private cloud installation so they can maintain control over that. The simple fact is most companies don't want their data training other companies' models.

Ed Sim: Yup.

Keith Brisson: … especially in regulated industries like healthcare and finance.

Sean Chou: Yeah. Data is definitely the muck side of AI that people like not to talk about as much, but it is really important for any company that's really looking at AI and what their AI strategy is going to be over time; though I think we're all in agreement that there shouldn't be an AI strategy, there should be a business strategy powered and enabled by AI tech. But you have to get your data policy in order, so companies definitely have to have a point of view on where data resides, who owns it, and so forth. And I don't think there's a right answer right now. Certainly, we get those questions too: "Hey, where exactly is this data? Where does it reside?" It becomes a bigger problem once you start talking about a global context, because then it's no longer just US privacy issues; you have entire governments that have very strong perspectives on where data should physically reside.

Michael Krigsman: Let me …

Ed Sim: We have a company for that, too. [Laughter]

Michael Krigsman: [Laughter]

Ed Sim: That's probably the big idea, and they use AI and machine learning to tell you where the data is, where it's located, and attach it to a unique ID before people run scenarios on top of it, so…

Sean Chou: Got all your bases covered.

Ed Sim: [Laughter]

Michael Krigsman: Ed is clearly looking at all the different facets of this in order to, as you say, cover his bases. This issue of the data is so important. Would it be accurate or inaccurate to say that for companies thinking about AI, it's actually a data question, rather than an AI technology question? Is that accurate or inaccurate?

Ed Sim: I would say it's pretty accurate, right? A couple of things: I'm so bullish on the enterprise and leveraging machine learning in the enterprise because, once again, a lot of these folks don't want to give their data to Google to train on, right? That's Google's model: "Hey, I'll give it to you for free, I'll open-source it," and [...] Facebook with their AI; but there is tons of value sitting inside of enterprises, right? And the more data you have, the better you can train, and the better you can predict. So the question really is: I'm dropping something on-prem, and I've got healthcare data; that's interesting. But if somehow I can tie that healthcare data anonymously to healthcare data from other companies, then you start creating a powerful data co-op and you can make better predictions. I think data is definitely a weapon, and it's definitely a weapon in the enterprise space.

Keith Brisson: It's definitely a weapon, but it's not all that you need. The technology and the skills required to process that data and turn it into something actionable are pretty technical and can be a distraction for most companies. So, you can have the data, but you also need the expertise to be able to transform it into something that's actually usable within your business. And that's something that tends not to make sense for companies to build themselves. That's at least our premise: we should provide that technology so that businesses can focus.

Michael Krigsman: That’s a really interesting … Oh, please, go ahead, Sean.

Sean Chou: I was just going to add, I think that data is certainly essential for the majority of AI applications right now, and it's largely a function of where we are. With narrow AI, for machine learning, for developing neural networks, you need data. You need feedback to improve that neural network. But, as we get closer and closer to more general AI, or even for very well-understood things, you don't really need to train sentiment analysis from scratch now, because that's a more general application. So, you're already seeing some applications and outputs from general AI research that don't really need to be trained.

So, I think that shift will occur over the next decade or two. Right now we need a ton of data, especially for domain-specific applications, and we're going to see more and more "general" general AI come out that won't need the training as much, or won't need as much data.
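
As a contrast to the trained classifier sketched earlier, the point Sean makes here, that some well-understood capabilities now work without task-specific training data, can be illustrated with an off-the-shelf analyzer. The example below uses NLTK's pre-built VADER lexicon, a simple rule- and lexicon-based tool rather than a neural model; the sample sentence is invented.

# Off-the-shelf sentiment scoring with no task-specific training data required.
# Assumes the nltk package is installed; the lexicon is downloaded once at runtime.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the pre-built sentiment lexicon

sia = SentimentIntensityAnalyzer()
scores = sia.polarity_scores("The agent resolved my issue quickly, thank you!")
print(scores)  # dict with 'neg', 'neu', 'pos', and an overall 'compound' score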

Michael Krigsman: This whole …

Keith Brisson: I absolutely agree with that. Data is almost a crutch right now, because the techniques around general AI have yet to be developed. So data is standing in for more sophisticated techniques that are still to come.

Michael Krigsman: It seems like it's quite a paradigm shift, to some degree, for people in the enterprise who are thinking about AI, because they tend to think through the lens of the technology, because it's so new, rather than necessarily thinking first and foremost about the business problem. And then the fact that so much of AI involves patterns, and therefore requires that large volume of data, requires a different way of thinking about how you go about solving problems. So there is an education process, I think, that still has to take place, and the result of this is all of the hype in the software and technology industry, because you can basically say whatever you want about AI. Everybody's got an AI, right? I've got an AI! [Laughter]

So, I was recently talking on this show with James Cham, who is with Bloomberg Beta, and he was saying we need a framework for making these kinds of investment decisions inside the enterprise. So, Ed, let me toss it back to you. When you think about investment decisions that involve AI, what are some of the criteria and things that you think about?

Ed Sim: Well, I don't try to make a specific technology decision when investing in AI. First and foremost, we tend to invest in technically-driven founders like Keith and Sean. So for us, it's deep domain expertise and understanding of the business problem they're solving. They didn't come to me and say, "We're building an AI company"; they said, "We're solving this problem; it's a big, big opportunity, and by the way, one of our core pieces of technology will be some use of AI." We factor that in a little bit, but the business problem was the main reason we funded it. So I think that's really, really important. It's just like every other tech trend out there. Back in the day, it was Java; you know, there were Java funds out there. Then it was everything mobile. And now, it's AI.

But guess what? I view AI as just like water; it's just like electricity. In the next five years, every technology company is going to use some form of AI. And so, I don't view that as separate [...] per se. That's why I say "applied AI." It's about what business problem you're solving, and how you're doing it 10x faster or 10x cheaper; if you do both, that's an amazing order-of-magnitude improvement.

Sean Chou: Totally. Like is there a web company today, you know?

Ed Sim & Keith Brisson: [Laughter]

Sean Chou: But there were! I mean, ten years ago, or fifteen years ago, time flies. You know, fifteen, seventeen years ago, there were web companies. But there are no web companies today.

Michael Krigsman: Well, if you think about it, when Salesforce started, they said, "We're doing sales force automation over the web" …

Sean Chou: Yup.

Michael Krigsman: … because it was new, and it was unique. And so, your feeling is that given some certain amount of time going forward, AI techniques will be incorporated just about everywhere, essentially?

Sean Chou: Yeah, I'm 100% convinced of that. And we're talking a lot about machine learning and human augmentation, but there are also a lot of simpler things: applying a lot of AI principles just makes a better product, because you're asking the person to do less thinking, since the product can do more on behalf of the person. So, you're going to see a lot of very subtle uses of AI, because it just makes a better product. It just lowers the cognitive cost on the human's side.

Ed Sim: I love that, Sean. I think that great AI is invisible to the end user. It just works! It's really easy and it just works. There's no friction involved, and I think that's what great AI is. And that's why the Echo is kind of cool. I mean, it doesn't fully work all the time, but you just talk to it and it works, most of the time.

Michael Krigsman: [Laughter]

Ed Sim: That's what great AI is. And bringing that experience to the enterprise, and to enterprise-level problems, that is pretty exciting for me. No one's really doing that yet, and we hope to do bits and pieces of it with Catalytic and Init.ai, but I think that's a huge, huge opportunity.

Michael Krigsman: What are the …

Sean Chou: We work so hard for end users to say, "Wait, what just happened there? [...]" It doesn't seem like it was really a lot, but they don't appreciate how much work is going on behind the scenes to make that small, magical moment happen.

Michael Krigsman: What are the inhibitors, or the obstacles that prevent this kind of adoption that Ed was just talking about in the enterprise? What needs to be in place to make it happen on a broader scale?

Keith Brisson: Personally, I just think it takes time. There are different companies tackling different components of it, and a lot of it is integration: as data becomes more connected between different parts of the enterprise, you enable new ways of taking these techniques and surfacing value to the people who [...] it.

So, to me, it’s just a matter of time; it’s a matter of these cool things being incorporated into the workflows where you’re not going to really see them.

Ed Sim: I would also say that there's an infrastructure answer to that, too. A lot of the enterprises that we talk to, and I'm sure you talk to as well, Michael, are re-platforming their technology and evaluating how to bring hybrid cloud into the enterprise. If you look at Pivotal, Pivotal just announced 72 million dollars last year from one-third of the Fortune 100 to help them think about infrastructure and cloud. So, once you have an agile platform to build off of, it becomes that much easier to develop and deploy an AI plugin on top of existing applications. So, I think part of it, too, is just what the underlying infrastructure looks like.

Keith Brisson: Yeah. And it's getting there, you know? We talk with enterprises, too. A few years ago, all of their data was siloed in different systems with no interconnectivity. We talk to them today and they actually have internal APIs and ways of connecting data together, which enables these kinds of applications. So, I think we're right on the cusp of the point where enterprises have, or are getting, enough connectivity between their different data stores that we can start to see these applications really blossom into every part of the enterprise.

Michael Krigsman: Keith raises a …

Sean Chou: I think one of the biggest inhibitors right now is just the lack of clarity as to what exactly AI is. On one end of the spectrum, you have the extreme that actually creates fear: the Terminator, Skynet type of AI. All the way at the other end, you have people who say, "Hey, I've just whipped together this thing, and it's AI," and a technologist will look at that and say, "Well, that's not really AI! That's just regular expression matching," right? So these extremes of what's being proposed and packaged as AI, that lack of clarity, and the buzziness of it all actually make it very hard for people to adopt, because they have to wade through all this crud to get to the real business value behind whatever the product is.

Michael Krigsman: Well, like I said before, it seems like virtually every technology company is selling an AI.

Sean Chou: Yeah!

Michael Krigsman: Hey, I have a couple of AIs. Do you want to buy one?

Sean Chou: Yeah! [Laughter] Yeah, yeah. That’s right.

Keith Brisson: Yes. I think probably the most extreme example is where they try to portray it as a single AI: this kind of all-knowing mind that can perform superhuman tasks. The fact is, generally what they have is a collection of machine learning-powered APIs or services that enable individual things. They may sell it as this super-knowing mind that's going to become Skynet and the Terminator, but the fact is, it's not. And as long as businesspeople recognize that the all-knowing mind may not exist, but the individual components can provide real, concrete value, then not living up to that hype shouldn't be a reason not to adopt those technologies.

Sean Chou: Yeah. In fact, the AI industry, or people who have AI, almost do themselves a great disservice by overhyping AI. So, the minute we have a conversation with a customer or prospect, we try to demystify what it is we do, specifically, and say, "This is, very tactically, how we use AI technology," because we want to very quickly get to the value that we're adding, and not get caught in this whirlwind and buzz of AI. I think if you let yourself get caught in that, and if you lean too much on it, you're always going to disappoint people, because you're never going to live up to the science fiction portrayal of AI, at least not for the next few years.

Michael Krigsman: Well clearly …

Ed Sim: The reality of it is, if you talk to enterprise buyers, they don't say, "I need an AI" or "I need AI," right? It's, "I have this problem, and if you can save me 30%, or do things faster, then it's interesting to me. And if you leverage AI, very cool, but that's not my checkbox." It's not the checkbox item. So, I think we also need to think about the budgets: why they're buying things and what ROI you're providing. And in my mind, AI can help create incredible ROI, but you've got to apply it to something.

Michael Krigsman: We have just about four minutes left. And, maybe we can just go around the virtual room, as it were, and let me ask each of you for your advice to business buyers; to people in the enterprise, who are looking at these technologies and hearing about - again, every vendor is just hyping this to the max, whether there’s substance behind it or not. So, what advice do you have for people in the enterprise for sifting through the hype so that they’ll get something useful from AI? Who wants to start?

Sean Chou: Me. I can start very quickly. I would just say, stick with the basics, you know? Make the business case; look at the value that's being created. AI, like I'm saying, is not a checkbox; it's an enabler. Where you see people making claims about AI, I would ask: what's the business value, and is AI being used to dramatically multiply that value, or could you get the same incremental gains from other, non-AI types of solutions? AI is going to evolve; what you're looking for is the potential to really move the needle. Not a 10% gain or a 20% gain; maybe AI allows you to tackle new problems, or allows you to get 2x, 3x, 10x types of gains over a traditional solution.

Michael Krigsman: Yeah, that's really interesting: asking that core business question, "What are we doing; what's the value; what are we trying to solve?"

Sean Chou: Yeah, absolutely. Stick to that.

Michael Krigsman: Keith, your thoughts on advice to folks in the enterprise who are hearing about this technology, and what should they do?

Keith Brisson: Yeah. My advice is to try to calm yourself down when you hear the hype. A lot of it is hype, but there's real value there. This is a long-term trend that is just getting started, and it is going to transform the way business processes are tackled throughout the enterprise. And really focusing on these immediate business use cases, like Sean and Ed were saying, is absolutely the right way to go. It's not about the technology; it's about the business process. I would encourage enterprises to be cautious about the hype, but really optimistic about the use cases this can enable, because there are going to be entirely new categories, like Sean was saying, and there is real potential to transform massive parts of enterprise workflows.

Michael Krigsman: So in other words, don’t buy into the hype, but focus on the real problems and practical solutions.

Keith Brisson: Absolutely! But, still be excited. It's okay to be excited about where this is going, because the long-term trend is there, and it is real. It's just that people are a little ahead of it right now. So, that's all.

Michael Krigsman: That's a great point! And, finally, Ed Sim, your thoughts. You've been working with folks in the enterprise for a long time. What's your advice for people who are hearing the hype and trying to figure out what to do?

Ed Sim: If someone is selling you AI snake oil and that’s your initial pitch, run for the hills!

Sean Chou & Keith Brisson: [Laughter]

Michael Krigsman: [Laughter]

Ed Sim: [Laughter] The reality of it is, start with the problem: think about what problem you have, and some form of AI can probably help you solve that problem and help you do it much faster, and there are tons of companies in every category leveraging AI. So, really make sure you talk with a few different folks, whether it's large companies or startups; hopefully startups, because I'm …

Keith Brisson: Agreed.

Ed Sim: And, the second thing is, as far as new categories: SecurityScorecard, for example, created a security ratings market almost overnight, and that wouldn't have been possible without leveraging neural network technology, machine learning, rules-based classification, all that stuff. But they're not going in and saying, "We're an AI security company." They're going in and saying, "Hey, I'm going to help you figure out which of your third-party vendors haven't kept their security systems in order and might be risky for your company." AI and machine learning never get brought up. So, I think the best companies are the ones that actually help solve a problem, and they have amazing, amazing technology, but they're not pitching that as the first entry point.

Michael Krigsman: Great advice! Clearly, there is unanimous agreement here that the way to go is to solve the business problems and find technologies that will enable those problems to be solved in dramatically better ways.

Everybody, thank you so much for watching! You have been watching Episode #222 of CxOTalk. Our guests today have been Ed Sim from Boldstart Ventures, Sean Chou from Catalytic, and Keith Brisson from Init.ai. I'm Michael Krigsman; tune in again next week. You can see all of our upcoming shows at CxOTalk.com/episodes. Thanks so much. Bye-bye everybody!

Chief Digital Officer: Lessons from a Former CIO

Christian Anschuetz, Chief Digital Officer, UL
Michael Krigsman, Founder, CXOTalk

With the Chief Information Officer role in transition, business expectations of the CIO have also changed. In this episode, we talk with a seasoned CIO, Christian Anschuetz, who left that position to become Chief Digital Officer of Underwriters Laboratories. The discussion explores the Chief Digital Officer role and offers advice to both CIOs and their organizations.

Christian Anschuetz is the Chief Digital Officer at Underwriters Laboratories, which he joined as Chief Information Officer in November 2008. Mr. Anschuetz has been responsible for establishing IT strategies, goals, and priorities and for providing senior leadership on key technology initiatives in the areas of enterprise resource planning, business process automation, computer systems validation, and electronic communications. He previously served as the Chief Information Officer and Executive Vice President of Americas at Publicis Groupe SA, where he was responsible for the strategic management and delivery of IT support to over 17,000 associates in more than 100 unique lines of business. Prior to Publicis, Mr. Anschuetz served as Vice President and Director of Operations at BCom3. He began his professional career in a broad range of progressive management roles, including Senior Consultant and Information Security Thought Leader for Sprint Paranet and Senior Partner/Founder of UpTyme Consulting. He holds a Bachelor's Degree in Economics from the University of Michigan, Ann Arbor, and a Bachelor's Degree in Computer Information Systems from Strayer University. He was a decorated United States Marine Corps officer and a veteran of the First Gulf War.

Chief Digital Officer: Lessons from a Former CIO

Michael Krigsman: Welcome to Episode #223 of CxOTalk. I'm Michael Krigsman, and I am your host. I'm an industry analyst, and we have a really interesting show. We are going to be talking about the role of the Chief Digital Officer, and our guest, Christian Anschuetz, works for a company called UL that everybody knows under the name "Underwriters Laboratories." So, I have to imagine that, having been founded in 1894, the company is different today than it was way back then.

Christian Anschuetz: Oh, it is so different than it was back in 1894. Hugely diversified, it is now a global leader. We're in over a hundred countries worldwide, with thirteen thousand people to this day. It's a fantastic company with an absolutely superb mission, which again is all about safety: a safer working and living environment.

Michael Krigsman: So, you were the CIO at UL for many years. And, then you transitioned recently into the Chief Digital Officer role. So, let’s begin by talking about that CIO role. So, what was your mandate as the CIO?

Christian Anschuetz: Well, I think I was just like every CIO. My job was to help create a contemporary technology platform, if you will, that would allow the company to be successful in the marketplace.

Michael Krigsman: And, what were some of the challenges that you faced? I mean, it's a really tough job. And, I've seen you talk a lot about the role of IT in terms of supporting innovation at the company. So, I think that's a particularly interesting aspect as well.

Christian Anschuetz: Well, you know, I think that everybody has a role in the space of innovation. And, I definitely think that technology, whether you’re in IT or in a line of business that’s associated with technology, you have to lead from there, because you simply are already in that cutting-edge space. And, I think we’re uniquely positioned as leaders in technology to be aware of new and emerging trends, and take advantage of them for our respective businesses.

Michael Krigsman: But, I guess, you know, the challenge that many CIOs face is bringing innovation back inside the organization, and getting out of just supplying the infrastructure, right? And, people use the buzzword “becoming a partner with the business.” So, maybe we can kind of explore what that is, and how do you go about doing that?

Christian Anschuetz: Umm, yeah. So, maybe you've got to bring innovation in. You know, I'm a firm believer in the idea of cross-pollination. I think that you really have to innovate by creating a […] so, you really have to spend about two-thirds of your time outside of your comfort zone, meaning outside of your industry. You learn from what others are doing and find connection points, and then you innovate by understanding what others are doing and bringing those ideas into your industry and into your company. Otherwise, what you end up with, Michael, is what we see all the time, right? It's an industry of "me too." If all you're following is the same players in your market, the same players in your industry, you're going to keep doing what the rest of the industry is doing. And how innovative is that? Or is it perhaps more interesting to bring something from outside the industry altogether, and create something altogether new? Maybe the new category takes you and your firm outside of your niche.

Michael Krigsman: That’s a very interesting point. I guess the question, then, becomes how do you do that? I mean, do you talk with startups? How do you bring external innovation ideas inside, and especially into IT in a way that will affect the broader business outside of IT?

Christian Anschuetz: So, is the question how do you do that?

Michael Krigsman: Yeah.

Christian Anschuetz: That’s, you know, that’s kind of the magic of it. Well, you know, I think so much of it comes down to a fundamental leadership conversation, right? So, first of all, you’ve got to lead by example. You have to be able to do that yourself. You have to be willing to be really uncomfortable, right? And push yourself in these new and different areas and hopefully inspire people to do the same.

When you bring these different ideas in, you have to hopefully make the connections and show that in these intersections, in these different things that you can possibly do with the business, you can maybe create an inspiring vision that has people go, "Wow! This is fantastic! This is something I want to be a part of!" I guess the point I'm trying to make, Michael, is you can't tell people what to do in this space, but you can inspire them to want to be innovative. You can inspire them to want to look outside their comfort zone, you can inspire them to want to look outside […].

Michael Krigsman: And so, can you give some examples from your experience at UL of how you did this? I know it’s a leadership issue, as you were describing, but I think it’s one that many people find very difficult, or there would be more of it.

Christian Anschuetz: Uhh, yeah. I think it is very difficult. Well, let's talk first about the last part: you said that otherwise there would be more of it. You know, what's your impression, Michael? Are most firms struggling with disrupting themselves, even though it's obvious that all firms are going to be disrupted?

Michael Krigsman: I mean, is that a setup question? I think disrupting oneself … Look, as people, it's hard to disrupt and rethink how we are, what we do, and how to improve ourselves, and companies are made up of people. So, absolutely, it's very difficult for most companies, and very few companies are actually disrupting themselves. I think that's really hard.

Christian Anschuetz: Yeah, well why is that, do you think?

Michael Krigsman: Hmm. The tables are turned. The interviewee becomes the interviewer. Again, I think the reason is that it’s easier to stay stuck doing what we know. So, in business terms, we have sources of revenue. And, we have processes. And, we don’t want to risk upending or disrupting those sources of revenue. So, we tend to do that which we’ve done before, which we know has worked in the past.

Christian Anschuetz: Yeah. Michael, I think you're exactly right. And, I'd add another dimension to it, actually. It goes back to what you were saying about businesses not wanting to disrupt their revenue streams, or disrupt their current models. I think there's another part to it, too, which is that I don't think people want to disrupt themselves. And when it comes right down to it, we can talk about IT and digital all day and all night, and think about it in terms of technology, but in the end, it really does come down to people. Just because it's digital doesn't mean we take people out of the equation. In fact, digital is actually more powerful when you consider people as part of the equation.

The reality is, I think most people struggle with disrupting themselves. I mean, change is hard. There's a reason we call growing pains "pains," right? Because it's hard to grow into new and different areas. And so, I think it's really important for us to tend to the wants, needs, and perspectives of the people we're affecting when we're having these conversations, in order to help bring in these innovations and disruptions and make them disruptions that are opportunities, as opposed to disruptions that are perceived as distractions.

Michael Krigsman: So, you're saying that the key is to engage the people who are, quote-unquote, going to be disrupted, in order to make them part of the change process.

Christian Anschuetz: Yeah. I think the key is actually to look at them less as people who are going to be disrupted, and more as people who are going to become disruptors themselves. They're going to become part of the disruption. At least, that's the perspective of a firm that's trying to disrupt itself.

Michael Krigsman: And is that what … Is UL trying to disrupt itself?

Christian Anschuetz: Yeah, of course. Well, we definitely are. We're a hundred and twenty-year-old firm that likes to think of itself as a hundred and twenty-year-old startup. And we do want to disrupt ourselves. Yeah, that’s right.

Michael Krigsman: Well, I guess for a firm … any firm that’s been in business for a hundred and twenty years has gone through many changes. And so, can you elaborate right now on what are … What is the focus of that disruption at UL?

Christian Anschuetz: Well, you know, UL is just a fantastic company. I think you have to understand a little bit about us, and let's just start with the "why" again. The purpose of UL, our mission and our purpose, is to make a safer, more secure, and more sustainable world. It's a mission for humanity, right? And we've accomplished that mission in the past by helping organizations test products to meet standards. Standards that sometimes we write ourselves, and sometimes we help participate in developing. And when a product passes the standard, that means that product is safe, it's sustainable, it's whatever; it's over the threshold for whatever reason that standard exists. And in many cases in our traditional business, that's about safety, right?

And yet, the thing that’s fascinating about us is that our mission is something other than testing. Our mission is about safety, sustainability, and security. Nowhere in that mission statement does it say we just test. And it’s very interesting, because the one thing this company has that is very unique is that we are a leader in the trust industry. We are trusted, we’re a third party, we’re hugely independent. We hold our integrity incredibly dear.

And the firms that know of us, and there are so many of them; over seventy thousand manufacturers worldwide; that's our customer base. They know this about us. And, when we have the opportunity to engage in dialogue with them and say, "You know, what are the real, deep problems that you're trying to solve?", it often is bigger than simply testing their products and helping them get to market. There are way bigger opportunities for us to perhaps pursue. And, we're disrupting ourselves by thinking about ourselves as pursuing these higher-order problems, as opposed to just the transactional testing activities that we do.

We’re a leader in science research. We spend more on R&D, at least to our knowledge, than anybody else in our industry. And we are constantly figuring out and learning about these new and emerging technologies, all the while figuring out how we can maybe disrupt the status quo as we learn more about everything from, you know, new and emerging alternative power sources, EVs, or, heck, for that matter, drones, and new forays into cybersecurity. I mean, what makes the world safe today is very, very different than what made the world safe in the past.

Michael Krigsman: That’s quite interesting. So, your underlying mission remains constant: safety, security, sustainability; that trust that you were talking about. Your underlying mission remains constant. However, the way that you, can we say, deliver that mission; that’s the thing that changes and is disruptive. Is that an accurate way of saying it?

Christian Anschuetz: That’s wholly accurate. And, you know, that’s what’s beautiful about our mission, Michael. If you think about our mission, it’s really not bound by a lot, right? I mean, making the world safer, more sustainable, and more secure gives us a lot of room to maneuver, right? And in that maneuvering, hopefully we can maybe reinvent ourselves.

Michael Krigsman: Yeah. That's a very interesting way to think about it. I think many companies don't have that sense of constancy or consistency about their core mission. And so, the disruption becomes a more complete type of change. But, it sounds … But, so you have that constant mission and when you, therefore, are thinking about disruption, the execution, the delivery of that mission, how do you then go about it? How do you then think about that transition, that transformation?

Christian Anschuetz: That’s a big question. So …

Michael Krigsman: Yeah. It’s tough. These are tough questions.

Christian Anschuetz: Yeah, they’re really tough in that, you know … It depends on what we are … Let’s just speak in the abstract; let’s talk about any firm. It depends, I think, on what the firm is trying to transition or transform itself into, right? And, I’m a big believer in “Start with why.” Our “why” is clear. Again, our “why” is a mission for humanity, you know. What we do, and how we do it, comes in that order. So you start with “why,” you go to “what are we trying to do,” and then we determine how exactly we do that. So it kind of depends on what a firm is trying to disrupt and transform itself into before you can probably, or at least before I could […], perhaps say how you might go about doing it.

But, I want to circle back to a previous comment and part of our discussion beforehand. You know, so much of this has to do with, again, people, right? We have to be absolutely deliberate and focused on making sure we bring people along for the ride. It’s so, so critical, Michael. And, I will tell you: if you were to ask me some of the differences between like a traditional CIO or maybe a CDO role, they’re both important roles and certainly, one is not better than the other. They’re just different, right?

I think the CIO role is typically more about internal transformation and efficiency; it can be, in a contemporary firm, an internally-focused role. A CDO role has to trust that a lot of that is happening internally, and then project it externally and bring the customers in. So I think the CDO role is typically more of an externally-facing role. But regardless, when you are effecting a transformation, either within your firm or by trying to create new value outside the firm, you really need to be considering people all along the way.

With regard to the CDO, because the role tends to have an external impact, it also changes the internal dynamics and how the company sees itself; maybe even how, definitely how, it runs itself, right? How it actually delivers this new value and starts these new things.

The scope of the responsibilities tends to be bigger, right? So, one is internal, and one is maybe more external, at least in this definition, right? But the CDO role is really all-encompassing, at least in my opinion. And, you know, this is where the soft skills become even more important […] because you really are responsible for changing the external perspective on […], and then you have to change the internal perspective, perhaps, on exactly what the firm does and the value that it creates.

And so, again, I’ll go back to what I think the CDO role [is]: actually managing transformations really involves people and organizational change management. It’s that saying; I’m stealing it from a contemporary of mine who said, you know, “The hard results you get are really coming from the soft skills.” And I do believe that’s true for the CDO role. Both roles. All these leadership roles, for sure, but definitely the CDO role.

Michael Krigsman: So, in practical terms, how is your role; how is your work as Chief Digital Officer different from what you did and what you focused on as Chief Information Officer?

Christian Anschuetz: Well, it kind of follows that same path that I was just on. I mean, the CIO role is really much more internally focused around internal operations, and the CDO role is much more of a customer-facing, customer-discovery, customer-exploration role. Again, going in front of customers and saying, “Okay, you know, what are the really big problems that you’re trying to solve?” And doing this outside of the context of how they normally see you as a firm. Remember, relationships are contextual, right? So if you and I only know each other in a certain context, and we keep talking about the opportunities to work together in new and different ways, it will always be influenced by the context in which we know each other. Is that a fair thing to say?

Michael Krigsman: Yes, of course.

Christian Anschuetz: Well, when you want to go into these customers and you want to discover these bigger opportunities, you have to first pull yourself out of that context that you’re known for, and probably talk to someone at that customer who doesn’t have that same context; I mean, the day-to-day context of how they do business with us today.

Now, this is why; you know, I’ve been speaking in generalities, but for the company that I’m with now, UL; with the permissions that we have as a leader in the trust industry, and as this independent, high-integrity firm, we have the opportunity and the latitude, in so many cases, to move outside of the typical interactions we have with our customers and engage in different ways, simply because we carry those traits with us. We’re the […]. And so, then we can engage in a different conversation and start having explorations around different, perhaps even bigger problems that we can solve for them. And, again, perfectly in conjunction with and in support of our mission and our purpose.

Michael Krigsman: So, when you talk about, again, going back to this consistency of mission and purpose, to what extent is this change and disruption affecting your underlying business model and the operations of UL?

Christian Anschuetz: Well, I think that has yet to be seen, Michael. I mean, I’m relatively new in this role, and, you know, that said, the company has been working to improve itself and diversify itself in accordance with our customer needs for a long period of time. That’s a very big disruption for any firm. You know, I sometimes wonder: when GE, you know, decided to go to GE Digital and really create this industrial internet, this Predix platform and all that, when did they know that’s what they were going to do?

Michael Krigsman: Yeah, what an interesting question. I mean, I think to … We’ve had a few people from GE on this show. We’ve had Ganesh Bell, who is the Chief Digital Officer for GE Power and Water – they have a different name, I think. And we had Linda Boff, who is GE’s Chief Marketing Officer. And, I think it became apparent to them that the market was changing, and GE needed to have a different kind of relationship with their customers. And so, they then re-thought, “Okay, what kind of technology platforms are they using? What is their business model? How are they selling? How are they pricing?”

And so, for example, instead of selling you a jet engine, they’ll … They own the jet engine, and they’re essentially licensing that jet engine to you, and you can pay on the basis of usage, obviously.

Christian Anschuetz: Jet engines, who would have thought. Right?

Michael Krigsman: Exactly. So, the question of how do you recognize when it’s time to change. I mean, at UL … And I want to remind everybody that we’re talking with Christian Anschuetz, who is the Chief Digital Officer of UL. And, I think everybody knows UL by the name “Underwriters Laboratories,” which was their original name before rebranding. And so, how do you, at UL, […] recognize, and when did you, and what are the signs that say, “Hey, we need to do something different?” It’s a really tough, really interesting question.

Christian Anschuetz: Yeah, and it's a tough question. I'm not sure if I can put my finger exactly on it and give your audience, your esteemed audience, a really great answer. We do know that there is a need for us … our entire industry knows that we're in a position where we can potentially be disrupted, right? And the question, without knowing exactly what that disruption will be, is a very simple one, and it's one that hopefully all companies and all leaders are asking themselves: "Do we want to be the disruptor of ourselves, or do we want to sit by, sit back, and wait until someone disrupts us and takes the initiative?" And, I think we … You know, I can speak for UL specifically in this case: we want to keep that initiative. Why give up that initiative when we can own it?

Now, exactly what that disruption’s going to look like, exactly what will happen; we aren’t sure. Yet, we do know that the only way we’re going to seize the initiative is to act and to do something. And what is that something? Michael, hopefully someday we’ll talk and we’ll go, “Wow! That was crazy a year ago or two years ago,” whatever it was. “How did you know you were going to get here?” And, you know, we’ll probably reflect back and say, “Well actually, we didn’t, and here is the series of milestones we hit,” and then suddenly, “Here was the epiphany: this is what we’re going to do, and this is how we’re going to change,” and create an entirely new category of business. Something outside of our traditional industry, which is TIC: testing, inspection, and certification.

Michael Krigsman: Well, it’s definitely not a straight line.

We have a few questions from Twitter. So, let's jump on those because they're pretty interesting. So …

Christian Anschuetz: To the best of my ability.

Michael Krigsman: To begin, Arsalan Khan asks, “It sounds like, to some extent, the CDO role is like a consultant to external clients.” I’m sure it’s not a consulting role, but in fact, there’s probably an element of that.

Christian Anschuetz: It’s actually a really great comment. And I think, you know, I would have been pretty far from using the word “consultant,” just the way I think of that word sometimes. Umm, I do think there’s something to that statement, though, because one of the things we have to do; and this goes back to the whole context thing; when we’re talking to our customers, when we’re really thinking about the businesses we want to be in and the problems we want to solve, is we can recursively ask “why,” right? Keep asking, “Why are you doing this? Why are you having this problem?” And I know “why” is a personal word, so, you know, “What makes this an issue for you?”, until you finally get to the root cause; you know, the root problems that the company’s real customer base is experiencing.

You know, from our perspective, they engage us for many, many different things. UL's a hugely diversified company, and very different than it was a number of years ago. The core of our business is still that we test products against standards, and when they pass, we help issue a mark; we tell the agency we're testing for that the product met the performance criteria, whatever, right? But when you start asking, "Why do you need the tests?" and "What makes you require this certification?", and you keep asking […], you start to understand that there's just a general lack of understanding among firms of what they have to do to really, safely, and in accordance with compliance and regulations, put their products into a specific market, right? And testing is a byproduct. That comes down to the "how" of what you actually do.

But, you could wind back and keep asking why until you get to a whole … a totally different problem statement, such that if you attack the “there” or the “why,” then what you do today could be, you know, relevant; it could be relevant in a different way. I suppose it could be rendered […], and I think that’s unlikely in this particular scenario. But, I think these are things we can resolve, but you have to …

The consulting question is good, because you have to go in there, and you have to do essentially customer discovery sessions. What are the real pain points, outside of the context in which they know you and you know them?

Michael Krigsman: And, Arsalan Khan has a very interesting follow-up to the point that you were just making. And he says, “So, yes, it’s good to know… We have to know customer pains and their concerns, and so forth. But, if we only listen to our customers, then Ford would have made just faster horses, not cars.”

Christian Anschuetz: Well, that comes down to the whole design theory, right? You can go and you can listen to just what they say, and that’s the Ford story, you know: “Instead of building a car, would they have built a faster horse?” But, what the customer’s really saying, when you recursively ask “why” enough, is that they actually needed to get from point A to point B faster. They had to do it without a certain amount of maintenance. They didn’t like using; I’m totally making this up [laughter]; they didn’t want wagons, they needed something with a certain amount of capacity. They didn’t want to sit side-by-side with somebody. In other words, the challenge might have been more about diversity in mobility than it would have been about a faster horse. And if you listened enough, you might have heard something different than a faster horse, too.

I totally get where that statement’s coming from now. I mean, I get it, and I believe in that. But, I think when you listen to them, you have to listen to what they say, you have to really understand what they mean. Those can be two different things.

Michael Krigsman: That’s a key point. So, it’s not just listening to the words, but it’s trying to divine; being empathetic, I guess you could say; what they really want. Listening recursively, as you were describing earlier?

Christian Anschuetz: Yeah. What do they want, and what do they really need? And if you look at some of the best disruptions, I mean, you’re talking about things that people didn’t even know that they wanted. I love the example of Uber. I know it’s kind of tired in so many ways, but just think about it. People just always took for granted that you had to stand sort of dangerously close to the curb and wave your hand waiting for a cab, and by God, hopefully, it wasn’t rush hour, or it wasn’t raining, you know? Or otherwise, you were kind of out of luck. And that, though, was just the way it was, right? Of course that’s just the way it is, it’s how the business works, that’s how … We got rides from Point A to Point B until someone said, “Wow! You know, there’s another need there.”

And actually, did they have to ask the customers, or did they just have to observe? And, I think that observation is key, and that kind of goes to that second thing. You can listen to what they say, but you've got to really follow the meaning. And, the meaning can be divined in any number of different ways, but observation is certainly one of them. I think it's probably the key one.

Remember, most of what we get from people is less about the words they say and more about how they say ‘em, right? It’s the nonverbal cues. And if you believe that, right? […] And there’s all the science to back that up, which makes it very clear. If you accept that, and you really kind of add the sort of subtle, nuanced observation piece, where you observe their behavior, well, that’s when you get into design thinking and you start understanding why some companies are just better at disrupting than others. They do more than just listen to words.

Michael Krigsman: It’s quite interesting: design thinking as a systematic means to do that kind of deep listening that you’re describing, in order to surface what the customer ultimately really cares about.

Christian Anschuetz: Yes.

Michael Krigsman: We have another interesting question from Twitter. Marc Orelen asks a burning question that I think is on all of our minds, which is: Why do we need a Chief Digital Officer? Why are the CIO and CDO roles separate? And he says, “Wouldn’t the ideal be a customer-focused CIO?”

Christian Anschuetz: I think that’s a great question and a great point. So, you know, it’s so funny. I got the CDO role just a short while ago. I’ve been operating in the capacity of CDO for a while, but I’m still the CIO. So, what’s the difference, right? No sooner had I gotten the role than I stumbled on an article in Forbes. It was January … It was this year, I think, in January. Forbes was saying, “Say goodbye to the CDO role.” And I read it, and I’m like, “Wow. That stinks. I just got the job.” [Laughter]

But the point of it was, and it was a really good point, is that if firms stop thinking about there being business strategy and digital strategy, and it’s just a contemporary strategy and the businesses are run with a very contemporary mindset, and it’s very agile around technology; it’s very inclusive of people and their involvement in technology, then you don’t need a CDO.

Michael Krigsman: So, I’ve heard people say that eventually, the CDO role may go away as the digital mindset, the digital understanding, kind of diffuses through an organization; that the CDO role, we could say, is a transitional role.

Christian Anschuetz: I think that’s right, you know? And I’m far from, I’d say, some sort of expert in this. I do think that’s right, though. But, let’s be honest with ourselves and look at the firms that we all know. And I’m speaking in general here. I think that having a CDO role in a company like; I’m just picking an example; like Google, for example, probably makes a little less sense than in a company like, say, maybe the Ford Motor Company, right? Both fantastic companies; and by the way, I drive Fords; love Fords.

But, you know, I think that there is this transition, as you said, and as firms … Firms don’t just overnight become this sort of digital entity, right? It took Ford a while to understand that they didn’t just do cars; that they did mobility. And then understanding what it takes to be mobile players in the digital world is still something that they’re embarking on. And so, having a CDO role that is sort of ushering in that understanding, this sort of contemporary culture, this contemporary understanding, this contemporary application to their business I think takes a certain amount of time.

So, counter to the Forbes article, which said "Say goodbye to the CDO role," was another article by McKinsey that talked about the CDO as a transformer-in-chief. And, you know, I prefer the latter article to the former. By the way, they're both great articles. But I think that's why you actually need the CDO role, at least right now, because I think we're in a state of massive transformation. And again, every industry is going to get disrupted and since we're all rather unclear as to how we do it; I mean, the very basis of why we're having this conversation, the questions you're asking. How are you going to know? How are you disrupting yourself? What are you doing about it? Because most of these questions are very difficult to answer for most firms. I think that's why the CDO role exists.

Michael Krigsman: Well, as you said earlier, it’s very difficult to disrupt ourselves as individuals, and it’s very difficult to make the changes needed to disrupt ourselves as companies.

We have another really interesting and, I think a pretty deep question, actually, from Sal Rasa, who says, “Is the CDO role a community relationship responsibility, a community relationship management responsibility, designed to inform change management decisions?”

Christian Anschuetz: I think that’s a big part of it. I really do. I go back to the statement about the people, and not leaving the people behind. That is all about change management, and I think that that is a really big part of it. Now, that said, there is an external portion of it that goes back to these adjacencies that we talked about. You have to be bringing the people on, but you also have to be an explorer, and you have to be utterly unafraid to go into new and different areas.

Jeff Bezos; I love one of his quotes, and he’s a very quotable person, right? He made a comment, and I think I’m attributing this properly; if I’m wrong, I apologize; but he said that “At Amazon, we're not afraid to be misunderstood." And, I think what's behind that quote is that they are okay going into new and different areas and having a lot of people scratch their heads and go, "Why the heck are they doing that?" But they're doing it as part of their exploration. Now, Lewis and Clark didn't make a beeline directly from the east to the west. It wasn't a perfectly straight line; we made that comment earlier, right? You know, a lot of people might say, "Well, why did they scale that mountain?" Well, maybe they didn't know they had a choice, or it looked particularly promising, or perhaps it gave them a whole new vantage point and a whole set of opportunities that lay beyond it.

I think there’s a people aspect to the CDO role that’s critical, and I think this exploration portion of it; bringing the people along in that exploration and, again, making them potential disruptors themselves; is actually very, very critical.

Michael Krigsman: […]

Christian Anschuetz: And yet again, another really [good point]. You do have … I remember when we were starting this conversation, you said, "Christian, just think: we're going to be sitting here talking around a table with a bunch of very, very smart people." You made that comment, and clearly the audience and the questions they're asking are making your statement very, very true.

Michael Krigsman: Oh yeah. Now the audience of CxOTalk is quite amazing.

Now we have another really interesting comment from Shelly Lucas, who is with Dun and Bradstreet. And, she makes the comment that she thinks you are ahead of your time as a Chief Digital Officer, because many digital leaders are focusing on the science rather than the people and the culture. And I interpret that to mean not just the science, but focusing on the technology platforms that enable this, as opposed to the people and cultural issues.

Christian Anschuetz: Well, thank you. You said it was Shelly?

Michael Krigsman: Shelly.

Christian Anschuetz: Well, thank you, Shelly. That's very kind. You know, I was in IT long enough to know, I mean, IT could implement the best system, and you fail to get the people on board with it, and you're going to have an adoption issue, you're going to have, well, we all know the stories, right? You can implement the best system and … By the way, a little IT joke: How do you make people love their old system? Implement a new one, right? And that’s because if you fail to bring [Laughter]… It’s true! It’s so true. It’s a joke, but it’s totally true.

Michael Krigsman: [Laughter]

Christian Anschuetz: Ummm.

Michael Krigsman: Spoken by somebody with a long history in IT. I’m sorry, I didn’t mean to interrupt. Please, go ahead.

Christian Anschuetz: But it’s totally true, and you know, so I learned at a relatively young age, and I’ve been trying to get better at it, and it is a bit of a struggle. But I’ve learned that you can only get down the path as far as you want to go when you have a lot of people in support. So, you’ve got to bring them along. And I go back to this topic of leadership at the end, but what is the obligation of leaders but to create a compelling vision and inspire people to fulfill that vision? And if you are unable to do that, then how would you ever really help to disrupt yourself and disrupt an entire industry? Because you're not going to disrupt it with just technology. You're only going to disrupt it with your people plus some technology.

Michael Krigsman: So the technol- … I mean, the way I talk about it often is the technology provides enabling capabilities, right? It lets you do things that you couldn’t have done before. So, for example, a software platform that lets you collect data. Well, you need, if you’re a digital company, you’re going to be relying on lots of data. Merely having that technology platform doesn’t mean that anybody is going to use it or do anything valuable with it.

Christian Anschuetz: You’re a wise man. That’s exactly right. How many great technologies were simply the wrong technologies even though they were perfect, just because they came out too soon? They came out too soon, so they were still wrong, right? And so, you know, was it because the technology was at fault? Was it because society or the audience was unready for it? Or was it a combination of the two, where the technology was right but too little time was spent making the audience understand why this was actually, you know, a really great value? I think there are probably a bunch of different answers depending on the use case you look at.

Michael Krigsman: So, how do you convince the organization to undertake the change that it needs; this kind of change; and then, can we go back to UL specifically and talk about the nature of this change process at UL?

Christian Anschuetz: Uhhh, sure. So, what’s the question you kind of want me to dial in on there? Is it change process specifically you want?

Michael Krigsman: Well, I think the … And by the way, we have about five minutes left, so as we wind down, what are the lessons or the takeaways about driving disruption; self-disruption; disrupting your own organization? How do you even begin?

Christian Anschuetz: Well, Michael, I think you begin; and you might be surprised to hear this from a company that prides itself on integrity and independence; it starts with transparency. You know, we ask our colleagues; and they’re getting better at this, and we’re just really kind of starting off; what are the directions that they think we should go in? What is the company that we can, and should, be? Again, unconstrained by anything other than our unique mission and purpose; again, […] for living and working environments, right? And our imagination. What could this company be? Getting them involved; I’ll tell you, that’s one of the most key things I can do. Again, I know it’s soft, and it has very little to do with inventing some whiz-bang, high-tech solution, but it’s been an important lesson for us, I think: involve our staff.

I think the other thing is, again, that thing we talked about already, which is changing the context of our conversations with our customers. They know us in a certain context; they give us permission to have different conversations with them than we traditionally do; so seizing those permissions, having a different conversation, and really trying to find the sort of root of desire, or the problems that plague them, that we have the opportunity to help them address and create new value for them and that portion of the company […]

Michael Krigsman: What about the role of senior management? You know, you’re talking about the grassroots side, but don’t you have to also go from the top down as well?

Christian Anschuetz: Well, you know, again, the senior management, that leadership: it's the vision, it's inspiring people to follow that, and then, of course, there's modeling, right? You know, I was in the Marine Corps, and the Corps taught you a lot about leadership and this concept of leading by example. And allowing yourself to be less than perfect; allowing yourself to fail and even celebrating that failure. So getting a management team on board with saying, "Hey, we're going to explore," and some of our explorations; perhaps even the majority of our explorations; are going to end in dead ends. Being accepting of that, I think, is critical, because that unfetters your organization. It makes them less scared to move into those areas, those roads less traveled, and become potential disruptors themselves. Because, if you're afraid that a dead end is going to be a blemish on your career, on your history, I think that you're actually stifling yourself. I think you have to free up your people, and to the best of your ability, free them from that particular fear and help them have courage. Well, there will be some fear, but a little less fear, a little more courage; and I think senior management's critical.

Michael Krigsman: Well, I guess that’s a … one of the most important and fundamental lessons. We have just a minute left, and Christian, I know that you are a vet, and I know that you’re very supportive of vets, and would you like to take a minute and tell us about some of your activities in relation to that?

Christian Anschuetz: Aww, thank you. Thanks, Michael. Yeah, I mean, just a quick plug. I'm part of an organization called Project RELO. It's a fascinating organization that uses transitioning veteran instructors to teach corporate executives the art and science of leadership. And, that's done in a very unique fashion. In partnership with the United States military, we do pseudo-military operations with this collective of executives and veterans and build a deep understanding that hiring our veterans is more than a social good; it's simply good business. If you want to learn more, check out projectrelo.org.

Michael Krigsman: Project reload, r – e – l – o – a – d-dot org.

Christian Anschuetz: Uhh, Project r – e – l – o-dot org. RELO.

Michael Krigsman: Got it! Okay! Check out projectrelo.org.

We have been talking with Christian Anschuetz, who is the former Chief Information Officer and now the Chief Digital Officer of UL, which everybody knows as Underwriters Laboratories. Christian, thank you for taking the time to be here with us today.

Christian Anschuetz: Thank you so much.

Michael Krigsman: We have more shows coming up, and they are great shows. Next week, we’re speaking with the CEO of Coursera, and he used to be the president of Yale University, so that’s going to be an interesting one. Check out cxotalk.com/episodes. Thanks everybody for watching, and we will see you next time. Bye-bye!

Automation, AI, and Business with Michael Chui (McKinsey) and David Bray (FCC)

Dr. David A. Bray, Chief Information Officer, Federal Communications Commission
Dr. David Bray
CIO
Federal Communications Commission
Dr. Michael Chui, Partner, McKinsey & Company
Dr. Michael Chui
Partner
McKinsey & Company
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Data and automation have the power to transform business and society. The impact of data on our lives will be profound as industry and the government make greater use of techniques such as artificial intelligence and machine learning. Explore this important topic with two world experts.

Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC’s Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to “think differently” on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He completed a PhD at Emory University’s business school and two post-docs at MIT and Harvard. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and the Roger W. Jones Award for Executive Leadership. In 2016, Business Insider named him one of the top “24 Americans Who Are Changing the World”. He is Chief Information Officer of the Federal Communications Commission.

Dr. Michael Chui is a partner at the McKinsey Global Institute (MGI), McKinsey's business and economics research arm. He leads research on the impact of information technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as Big Data, Web 2.0 and collaboration technologies, and the Internet of Things. Michael is a frequent speaker at major global conferences, and his research has been cited in leading publications around the world. His PhD dissertation, entitled "I Still Haven't Found What I'm Looking For: Web Searching as Query Refinement," examined Web user search behaviors and the usability of Web search engines.

Download Podcast

Automation, AI, and Business with Michael Chui (McKinsey) and David Bray (FCC)

Michael Krigsman: Welcome to Episode #219 of CxOTalk. I'm Michael Krigsman, an industry analyst, and your host. Today, we have such an interesting show. We're going to be talking about data, automation, machines, machine learning, AI, and the role of all of this in business today and what it means for the future.

Our guests are David Bray, who is … Well, David has been on the show a number of times; and David, why don’t you introduce yourself?

David Bray: I am David Bray. I refer to myself as a digital diplomat and human flak jacket, otherwise known as Chief Information Officer at the Federal Communications Commission. I should also mention that I'm an Eisenhower Fellow to Australia and Taiwan, which means I met with government and industry leaders in both countries about their plans for the Internet of Things. And I'll briefly just mention that ten years ago, I met Michael Chui at the Oxford Internet Institute. We were actually working together on how one could best do what was called "distributed problem solving": how you could bring together human and technology nodes to reach better outcomes in organizations. And so that's why it's so great to be here with Michael again, talking about artificial intelligence.

Michael Krigsman: Fantastic! And, I met Michael Chui, who is with the McKinsey Global Institute through David. And, just after we arranged for Michael to be on this show, I saw him on CNBC in the most interesting segment. So, Michael Chui, welcome to CxOTalk, and please tell us briefly about yourself and what you do at McKinsey.

Michael Chui: Sure, delighted to. Like David, I was once a CIO of a public sector organization, but it was much smaller; it was a municipality. Now I’m a partner at the McKinsey Global Institute, which is part of the larger McKinsey & Company, a management consulting firm, and I lead some of our research on the impact of long-term technology trends, including some of the distributed problem-solving that David mentioned, but also artificial intelligence, robotics, and automation.

Michael Krigsman: Michael, you have been studying data analytics. You came out with a very rich report; a lengthy, deep, and interesting report; last December. Please share your view with us on data and digital disruption.

Michael Chui: Well, one of the things that we've been studying for quite some time is the potential impact of using data and analytics to change organizations and society; as you said, for instance, disrupting either industries or organizations. Our first publication was actually five years ago; it was the report on Big Data; and to an extent, the report that we published in December is a sequel. Sequels are a good thing in movies, and sometimes they're a good thing in research, too.

One of the things we identified back in 2011 was the tremendous potential of using data and analytics to really change the game. And we looked at the public sector in Europe, we looked at health care in the United States, we looked at manufacturing and retail, and location-based services, and said: In each of these domains, if you use data and analytics, either as a basis of competition, or as a way to increase the efficiency or effectiveness of what you’re doing, or to change business models, we saw the potential for all kinds of good things to happen; both for the organizations themselves, whether it’s a retailer trying to sell more or a health care provider trying to improve the healthcare outcomes of a country or an organization, etc.

We saw all kinds of potential. We said, “That could potentially happen within ten years.” And five years on, we said, “Let’s see how things are going. Let’s see how much value has been captured out of the billions of dollars of potential value.” And by the way, that’s not just profits, it’s improved healthcare outcomes, better services provided to citizens, etc.

And quite frankly, what we found in this last piece of research amongst the many findings that we discovered: When we had that sort of “Look into the rear-view mirror,” we said that the report card was actually quite mixed. Some organizations have really expanded their ability to use data and analytics, and some sectors have moved farther than others. And to be frank, those are sectors in which you had, in many cases, a digital native competitor you had to compete against. So retailers had moved along farther.

There are other sectors and organizations which, quite frankly, have been, on average, farther behind, capturing only about 10 to 30% of the potential value. Unfortunately, some of those have been public sector. But again, there is great individual variation.

The other thing we found is the spread in performance between not only sectors but individual organizations; and I know David's been doing terrific things at the US FCC; and again, some of the organizations that have been doing the most have really extended their lead versus median or even lagging organizations.

Michael Krigsman: So, David, your view is a public policy view, especially when you wear your Eisenhower Fellow hat, so any thoughts on this from a public perspective?

David Bray: So, I agree 100% with what Michael said, that different sectors are embracing the opportunity provided by data analytics, artificial intelligence; and that it does seem to be, for those sectors where there is a digital native incumbent or an organization that is either a startup or already present, that has embraced the digital regime; that pressures the rest of those organizations to move along.

The public sector, on the other hand, you have the challenges of not just in the United States but around the world, governments are facing pressures to do more with less. Here in the United States, we had sequestration; but also talking to Australia, when I met with them, their public sector leaders were facing the possibility of a recession. In Taiwan, the economy was not growing as fast as it had previously. And so, here you have shrinking budgets, but at the same time, you are being asked to transform how you do your work; and so, it’s the challenge of how do you leapfrog from legacy investments in IT? You can’t be a startup, because you have to keep things running, and you have to keep on serving the public. At the same time, if you keep on doing things on-premise with legacy IT that’s on average five to ten years old, you won’t be able to get where you need to go.

And so, at the FCC, when I arrived three years ago, it was an interesting situation where they had nine CIOs in eight years, which I said was a great sign for CIO #10 that things are just going great; and I quickly surveyed that they had more than 207 different IT systems that were on average more than ten years old - we even had some that were approaching nineteen or twenty years of age - and they were consuming more than 85% of our budget, just to maintain those systems.

And so, that's where I said, "In two years or less," at the time, it was rather ambitious [proposal], and I think I had a lot of people that were a little surprised, "In two years or less, we want to have nothing on-premise at the FCC. We want to go straight to either public cloud, or a commercial service provider, because you cannot capture the benefits of data analytics, artificial intelligence, the Internet of Things, and making sense of the data that's coming from them if you are still tied to legacy IT."

And the good news is, two years later, we did it. We reduced our spending from 85% to 50%, and in a lot of ways, it was just setting the scene for getting ready for making sense of all these widespread data sources, making sense of what can be brought in from machine learning and AI.

Michael Krigsman: Michael, how do organizations make the decision to invest, and where should they invest? How do AI and machine learning come into play in a practical sense, as opposed to all the media hype or science fiction? How do we become practical about it?

Michael Chui: Well, a few things. I think one of the things that has happened over the years that we've been doing this research is that more organizations have started to understand the potential of data and analytics, and then the application of these techniques - AI and machine learning - so awareness has certainly grown. In fact, amongst executives, whether they be public sector or private sector, they're starting to understand that, in fact, data and analytics are either becoming a basis of competition or a basis of providing the services and products that your customers, citizens, and stakeholders need.

The thing is, we've reached that level of awareness, but as David and I have talked about, in terms of what has actually been captured, what sorts of value have actually been generated, it comes down to a number of reasons. And I think as we looked at organizations all around the world, if you ask an executive or leader there, "Are you thinking about data? Are you thinking about analytics, and are you doing anything?", almost everyone says, "Yes." And many would say, "Oh, we've got a successful pilot where we conducted this experiment. We've invested in the technology, we've invested in hiring some data scientists and analysts, etc., in software and hardware; on cloud, or on-prem, what-have-you, while doing that transition."

What we found oftentimes: while there are oftentimes real technology challenges which take real investment, and time, and energy; and as a technologist myself, it’s very interesting to talk about those things; oftentimes what we find the real barrier is, is around the people stuff. It is: how do you get from an interesting experiment where there’s a business-relevant insight? We could increase the conversion rate by X percent if we actually used this next-product-to-buy algorithm and this data; we could reduce the maintenance costs, or increase the uptime, of this good. We could, in fact, bring more people into this public service because we can identify them better.

But getting from that insight to really capturing value at scale is where we've started to find organizations are either stuck or falling down. And it really has to do with: how do you take that interesting insight, that thing that you capture; whether it's in the form of some sort of machine learning algorithm, or other types of analytics; and embed it into the practices and processes of an organization so that it really changes the way things operate at scale? To use a military metaphor: How do you steer that aircraft carrier? It's as true for freight ships as it is for military ships. They are hard things to turn.

And that organizational challenge of understanding the mindsets, having the right talent in place, and then really changing the practices at scale; I think that’s where we’re seeing a big difference between the organizations that have just reached awareness and maybe done something interesting, and the ones that have radically changed their performance in a positive way through data, analytics, and AI.

Michael Krigsman: I want to remind everybody that you’re watching CxOTalk. And, right now, there is a tweet chat that is taking place on Twitter, of course, using the hashtag #cxotalk. You can ask questions to our guests directly and they will answer.

David Bray: [Laughter]

Michael Krigsman: [Laughter] Well, we hope they’ll answer.

So, David Bray, Michael Chui was saying that the barrier to adoption is the people. Now, in the realm of AI and machine learning, how does this particular issue play out? Is there anything that’s unique about AI and machine learning that we need to be considering when we talk about adoption and the proliferation of these technologies in the enterprise in a meaningful way?

David Bray: So, that’s a really great question. First, I would say that I emphatically agree with what Michael was saying, that the real secret to success is changing what people do in an organization, that you can’t just roll out technology and say, “We’ve gone digital, but we didn’t change any of our business processes,” and expect to have any great outcomes. I have similarly seen, both in the private sector, and in the public sector, here in the United States, here in the Federal government, but also in other countries like Australia, Taiwan, and other places in Europe, where they’ll do experiments that are isolated from the rest of public service; and they say, “Well look, we’re doing these experiments over here!”, but they’re never translating to changing how you do the business of public service at scale.

And to do that requires not technology, but understanding the narrative of how the current processes work, why they’re being done that way in an organization, and then what is the to-be state, and how are you going to be that leader that shepherds the change from the as-is to the to-be state? For public service, we’re probably lacking conversations right now about how to dramatically deliver results differently and better to the public.

Now, for artificial intelligence, in some respects, it’s just a continuation of predictive analytics, a continuation of Big Data, it really is nothing new in terms of the fact that technology always changes the Art of the Possible, this is just a new Art of the Possible. I do think there’s an interesting thing in which it could offer a reflection of our own biases through artificial intelligence. If we’re not careful, we’ll roll out artificial intelligence, populating it with data from humans, [and] we know humans have biases and we’ll find out that the artificial intelligence itself, the machine learning itself, is biased.

At the same time, we could actually use it to say, “Look for biases in past outcomes, past decisions, past performance within this organization, and let us know where things weren’t as equitable or as beneficial as they could be.” So AI could either, A) be a dangerous tool that just reflects, augments, and amplifies human biases, or B) give us a chance to look in the mirror and say, “You know, you’re being biased when you make these decisions or these outcomes.” And I think that’s a little bit more unique than just predictive analytics or Big Data.
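
To make Bray's point about auditing AI for bias a bit more concrete, here is a minimal sketch of the kind of check he describes: compare historical favorable-outcome rates across groups and flag large gaps. The file name, column names, and the four-fifths threshold below are illustrative assumptions, not anything the FCC or McKinsey has published.

```python
# Minimal sketch of auditing past decisions for group-level disparities.
# The file name, the "group" and "approved" columns, and the 0.8 ratio rule
# (the informal "four-fifths" rule of thumb) are hypothetical placeholders.
import pandas as pd

decisions = pd.read_csv("past_decisions.csv")  # hypothetical historical outcomes

# Favorable-outcome rate for each group represented in the data.
rates = decisions.groupby("group")["approved"].mean()

# Flag any group whose rate falls below 80% of the best-treated group's rate.
reference = rates.max()
for group, rate in rates.items():
    flag = "REVIEW" if rate < 0.8 * reference else "ok"
    print(f"{group}: favorable-outcome rate {rate:.2%} [{flag}]")
```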

Michael Krigsman: Michael, let’s drill down now into AI a little bit more deeply, and machine learning. In your research, what are some of the business areas that are today most well-suited, and where do you see this going?

Michael Chui: So, we did do some research in terms of trying to understand where there’s the greatest potential for some of these technologies. One of the interesting things that we discovered was … Well, first of all, our hypothesis, as we looked across about ten different industry sectors and then a dozen different types of problems in each, was that, quite frankly, much as we find for other techniques, most of the value would be concentrated: 80% of the value coming from 20% of the problems, or something along those lines.

When we surveyed about 600 different industry experts, for every single one of those problems we identified, at least one expert suggested it was one of the top three problems that machine learning could actually help improve. And so, what that actually says is the scope for potential is just absolutely huge. There’s almost no problem where AI and machine learning potentially couldn’t impact and improve performance.

A few things that come to mind: One is a lot of the most interesting and recent research has been in this field called “Deep Learning,” and that’s particularly suited for certain types of problems with pattern recognition, oftentimes images, etc. And so those problems that are somewhat similar to image recognition, pattern recognition, etc. are some of those that are quite amenable and interesting.

So again, in terms of very specific types of problems, predictive maintenance is huge. The ability to keep something from breaking; so rather than waiting until it breaks and then fixing it, the ability to predict when something's going to break; not only because it reduces the cost, because it's cheaper, in fact, to keep something from breaking than to pay someone to fix it, but more importantly because the thing doesn't go down. If you bring down a part of an assembly line, you bring down the entire factory, or oftentimes the entire line. So being able to avoid that cost; it's the same kind of thing for a jet engine on the wing of an airplane, etc.

And so, to a certain extent, that is an example of pattern matching. When you have all of these sensors, which of the signals actually indicate that something’s going to break and that you should go and do some preventive work? And so, we find that across a huge number of specific industries that have these capital assets; whether it’s a generator, a building, an HVAC system, or a vehicle; where if you’re able to predict ahead of time that something’s going to break, you should actually conduct some maintenance. That is one of the areas in which machine learning can be quite powerful.

But the other thing is, again, taking this idea that you have one problem in one area, and you look for analogous problems. If you take health care as another case of predictive maintenance; on the human "capital asset"; then you can start to think, "Well, gosh! I have the Internet of Things, I have sensors on a patient's body; can I tell before they're going to have a cardiac incident? Can I tell before someone's going to have a diabetic incident, so that, in fact, they can take some actions which could be relatively less expensive, and less invasive, than having that turn into an emergent case where they have to go through a very expensive, painful, and urgent-care type of situation?" Again, can you use machine learning to be able to do prediction? Those are some of the things that we're starting to see in terms of problems that can potentially be better solved by using AI and machine learning.
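
As a rough illustration of the predictive-maintenance pattern Chui describes; a hedged sketch on synthetic data, not any company's actual pipeline; one can train a classifier on windowed sensor features labeled by whether a failure followed, then schedule maintenance when the predicted failure probability crosses a cost-driven threshold. The feature choices and failure rule below are invented for the example.

```python
# Predictive-maintenance sketch: learn which sensor patterns precede failure.
# Data generation, features, and the 0.5 threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row summarizes one window of readings from a machine:
# [mean vibration, peak temperature, hours since last service] (standardized).
n = 5000
X = rng.normal(size=(n, 3))
# Synthetic ground truth: high vibration plus high temperature raises failure risk.
risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 1.0 * X[:, 1] - 1.0)))
y = rng.random(n) < risk  # True = a failure occurred within the next window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# In operation: score incoming windows and flag machines for maintenance
# when predicted failure probability exceeds a threshold chosen from cost trade-offs.
prob_failure = model.predict_proba(X_test)[:, 1]
print("flagged for maintenance:", int((prob_failure > 0.5).sum()), "of", len(prob_failure))
```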

Michael Krigsman: David, a practical question for you, and then we have a few questions from Twitter. So, as a CIO, how much are you thinking about AI, machine learning, and predictive analytics in the operations of your organization?

David Bray: So, right now, I actually have an ask out to all eighteen different bureaus and offices of the Federal Communications Commission to identify a bureau or office challenge or problem involving the public that they would like to have machine learning and artificial intelligence brought to bear on. Woe be to the CIO that tries to force a solution onto a bureau or office that's not ready for it yet. And so, this is trying to see if they are receptive, if they can spot something. And maybe it is identifying where we can provide, as Michael just mentioned, preventive maintenance of services that can actually benefit the organization and benefit the public, or making sense of comments that we receive.

We did actually, back in 2014, make public the comments we received on a specific issue; that involved four million comments; with the idea that we actually wanted to allow the public to bring tools to bear to make sense of them: sentiment analysis, understanding what was either a "for" or "against" position. And I think in some respects, public service has the opportunity that it's not necessarily in competition with any organization; we could actually make our data available; recognizing we need to protect privacy; but once we protect privacy, making that data available to the public sector and the private sector to make sense of it. We don't have to do it by ourselves. And so, I think the opportunity for artificial intelligence and machine learning is: what are those things; it's a little bit harder at a national level; that will benefit the public?

I think a lot of things are going to happen first in cities. I mean, we’ve heard talk about smart cities. There, you can easily see the value if you can actually do preventive maintenance on a road, or provide power better, actually monitoring it to avoid brownouts. I think the real practical, initial, early adopters of AI and machine learning are going to happen first at the city level in some respects, and then we’ve got to figure out how we can best use it at the federal level.
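
The public-comment example Bray mentions maps onto fairly standard text classification. The sketch below, with invented comments and labels rather than actual FCC data or the tools used in 2014, shows one simple way to score comments as "for" or "against" a proposal.

```python
# Minimal "for"/"against" classification of public comments.
# Example comments and labels are invented; this is not the FCC's 2014 pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hand-labeled training set (in practice, thousands of labeled comments).
train_comments = [
    "I strongly support this proposal and urge the Commission to adopt it.",
    "This rule is essential to protect consumers.",
    "I oppose this proposal; it will harm small providers.",
    "Please do not adopt this rule; it goes too far.",
]
train_labels = ["for", "for", "against", "against"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(train_comments, train_labels)

# Score new comments in bulk; at four million comments this would be batched.
new_comments = ["I urge you to adopt this.", "This proposal should be withdrawn."]
print(classifier.predict(new_comments))
```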

Michael Krigsman: We have some questions from Twitter, and one is from Bob Russelman. This is a really interesting one, and he's asking about the impact of automation and AI on human employment. And I think when we talk about AI, robots, and autonomous systems, this is one of the big questions that comes up. So Michael, what are your thoughts about that?

Michael Chui: Sure. So, about a month after we published the Age of Analytics report - so about one month ago - we published another report - and by the way, these are freely available on the web - which really looked at the potential for automation to affect employment and the global workforce.

So, a couple of things. One of the things that we did in this research was to look not just at whole occupations, because we think it's quite rare that, in fact, you'll be able to remove someone from a job and put an AI or a robot in there that will do everything that they did. People actually conduct a number of different activities in any job. So, we looked at things at the level of individual activities, and scored them against eighteen different capabilities that could potentially be automated - everything from fine motor skills, navigating the physical world, cognitive capabilities such as problem-solving, sensory activities, and even understanding and producing natural language. One of the highlight findings is that about 50% of all the activities we pay people to do in the global workforce could potentially be automated by adapting currently-demonstrated technologies; which sounds scary: Wow! Fifty percent of the things that we pay people to do. But that's not going to happen overnight.
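
As a toy illustration of that activity-level accounting, here is a sketch with invented activities, hour shares, and automatability flags; the real analysis scored each activity against the eighteen capabilities Michael mentions rather than a single yes/no flag.

# Toy illustration of activity-level automation accounting.
# The activities, hour shares, and flags are invented for illustration.
activities = [
    # (activity, share of paid hours, technically automatable today?)
    ("collecting data",             0.17, True),
    ("processing data",             0.16, True),
    ("predictable physical work",   0.18, True),
    ("unpredictable physical work", 0.12, False),
    ("interacting with people",     0.16, False),
    ("applying expertise",          0.14, False),
    ("managing others",             0.07, False),
]

automatable_share = sum(share for _, share, auto in activities if auto)
print(f"share of paid activity hours technically automatable: {automatable_share:.0%}")
# -> roughly half, the kind of headline figure described above, without
#    implying that any whole occupation disappears.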

And again, part of our analysis was understanding what those timeframes might be. Now, we can't predict the future, so we developed some scenarios with really wide bands around them. When you think about the requirements: I said that theoretically, 50% of these activities could be automated, but really, it takes time to integrate those capabilities technologically and create individual solutions. And then beyond that, you have to create a business case, because what I didn't say was whether this would cost less than it does for a person to do it. So again, you compare the cost to develop and deploy these technologies against the cost of using people to do the same things.

And then finally, there is the natural adoption curve of any technology, which often takes eight to twenty-eight years from the time something's commercially available to the time it reaches a plateau in its eventual full adoption. Put all of that together, and it might take something like forty years, or ten presidential terms - at least, that's the middle point of all the scenarios that we modeled - before 50% of current activities are even automated.

What’s interesting is that that level of change in what people do is not unprecedented. If you look back at 1900, about 40% of the US workforce was in farming and agriculture, and about seventy years later, about 2% was. So what that actually says to us is that, in fact, we need to find new things for people to do as automation comes into play, so that people are complements to the work that machines are doing. And in fact, we need that quite badly because of aging. We need everybody working, plus the robots working, to have the economic growth that we need.

It's been done before. I'm a sunny Californian, so I'm hoping this can be done, but it will require real effort to make sure we actually find new activities for people to do and find ways to make sure they get paid to do those new activities as machines work alongside human beings.

Michael Krigsman: So very clearly, then, this technology has the potential to drive a major social upheaval! Michael, that’s essentially the implication of what you’re saying!

Michael Chui: Yeah. I think the question is what word you want to use. I think “shift” is a different word than “upheaval,” and a different word than “disruption.” But, what we are saying is that it affects all of us, because again, it’s not 50% of jobs; nearly 100% of jobs will have a significant percentage of their activities change. How can we all have the flexibility, have the training, have the retraining, so that we’re enabled to do new things as we use machines to improve our productivity?

David Bray: I would like to add to what Michael is saying, because I agree. It really is about augmenting human capabilities as opposed to replacing human capabilities. Instead of talking about artificial intelligence, we almost should be talking about augmented intelligence. And as we talked about earlier, what machine learning and AI are really good at are things that involve pattern recognition and that are repetitive in nature.

So in some respects, I don't know that we humans want to do things that are repetitive and rote, the same thing over and over again for hours on end. What this is really doing is freeing us up to focus on those jobs that are going to be nonroutine, where there is no pattern present; or even, in fact, where the machine tips and cues us and says, "I've identified something that falls outside of the pattern. You should pay attention to it. I don't know why it's happening, but it's going to require a human to take a look at it." It's almost freeing us up to focus on those things that require more creativity.

Now, that said, it does require us to ask interesting questions, such as: What skills should we be teaching, not just to current students in school so they can be ready for this future of working together in augmented capabilities, but also in retraining existing workers, so that they can be ready for a future that is not necessarily going to be rote and repetitive work for them, but is instead going to be about, “What is the non-routine work? What is the diagnostic work when a machine tips and cues you to pay attention to something?” We really do need to look at what the future of pairing humans plus machines working together looks like, as well as what new patterns of work will emerge as a result.

Michael Krigsman: And what about the ethical issues of this? It's so fascinating to me because we've got essentially a technology, or set of technologies and techniques that very quickly have cultural, social, and educational implications; and therefore, that immediately takes us down the ethical pathway. So, what about that?

David Bray: So …

Michael Chui: [...]

David Bray: Go ahead, Michael.

Michael Chui: Go ahead, David!

Michael Krigsman: [Laughter]

David Bray: You first, Michael!

Michael Krigsman: [Laughter]

Michael Chui: [Laughter] A couple of things. You know, let me just build on something that David said before in terms of the need for augmentation. You know, one ethical issue to bring to bear is what is it that we’ll need to make sure that the next generation actually has better lives than this generation? For the past fifty years in the biggest economies in the world, about half of the economic growth we’ve seen has come about because of increases in employment, about half of it because of increases in productivity; the ability to use machines and other management innovations to do more with fewer hours.

In the next fifty years, we’re basically going to lose half of our sources of economic growth. Why? Because countries are aging. The US is aging. China’s workforce is actually decreasing in size, and that’s a billion-and-a-half people. In Japan, it’s already happening. And so, unless we have everybody working, plus the robots working, we simply won’t have enough economic growth for the next generation to have better lives than we do. And so, that’s an ethical question already. It actually suggests we need to accelerate the use of automation. But that means, I think, Michael, it’s time to get to your question. As David mentioned before - and this is true not only for AI but for all technology - we have embedded a lot of our values in the technologies that we developed.

So, you know, lots of people talk about self-driving cars and the trolley problem: if a car turns one way it hits the people outside, and if it turns the other way, it kills the people in the car - what should be done? That's a particularly stark and interesting philosophical discussion, but it will be a long while before we need to worry about those in a really deep way, because to a large extent, the cars are not automating the ability to do philosophy in practice. They're incorporating algorithms about what they're seeing on the road.

I think more importantly, with many of these technologies - particularly machine learning, which is more about training computers rather than programming them - understanding what data you have in your training set is perhaps the most important thing. As David said, sometimes that training set is biased in terms of the data that you selected. And that’s where this idea of not just using data and analytics, but using them well, is what’s most important. And I come back to this point: It’s not just the data and analytics, it’s about the people who use them.

And that's a lot of what goes into being a good data scientist. How do you make sure that you understand the provenance of the data, and the biases that come about because of how you collected the data? One of the most famous examples that a lot of us who spend time in data talk about is the use of a mobile device, in Boston, in order to determine where there are potholes. People drive around, the accelerometer notices a bump, and it says, "Oh, there might be a pothole there." And one of the things that, at the time, was true, was that there was a bit of a bias in that sample set, based on who had smartphones at that time. So again, one needs to understand how biases come about; it's really an ethical issue, what training set you're using in order to train a machine learning algorithm.
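
A minimal sketch of the accelerometer idea, with simulated readings, shows both the detection and where the sampling bias creeps in; the threshold, positions, and noise levels are assumptions for illustration rather than anything from the Boston project.

# Minimal pothole-detection sketch: flag a "bump" when the vertical
# acceleration spikes beyond a threshold. Readings and positions are
# simulated; a real system would also need to filter out speed bumps,
# train tracks, and sensor noise.
import numpy as np

rng = np.random.default_rng(1)
n = 500
vertical_accel = rng.normal(0.0, 0.05, n)        # g's around baseline
gps_position = np.cumsum(rng.uniform(1, 3, n))   # meters along the route

# Inject a few simulated potholes as sharp spikes.
for idx in (50, 210, 333):
    vertical_accel[idx] += 0.6

THRESHOLD = 0.3  # g's; an assumed tuning parameter
bumps = np.where(np.abs(vertical_accel) > THRESHOLD)[0]
print("possible potholes at (meters along route):", gps_position[bumps].round(1))

# The bias point, in one sentence: these detections only exist for routes
# driven by people who carry the devices, so streets with fewer smartphone
# owners in the sample simply generate fewer pothole reports.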

Michael Krigsman: We have a couple of people on Twitter who have asked the same question or made the same comment. And I want to remind folks who are watching on Facebook, if you want to be part of the discussion right now, hop over to Twitter using the hashtag #cxotalk. You can watch on Facebook and chat over on Twitter.

So, Neil Raden and Bob Russelman have both raised the comment that in this new world of job training, what kind of skills are going to be needed for people to be trained and to adapt?

David Bray: So, I will actually pull into that question what Michael just said about biases. I think it is being aware of both your own biases and other people's biases, and how that impacts what the machine does. It's something that, if you're lucky, maybe you pick up either from your own childhood upbringing or from your schooling, but I don't think we currently have significant forces focused on it. I don't even know what the subject would be, other than critical awareness: being aware of your own biases and being aware of the biases of others, and how that impacts outcomes involving a machine and involving an organization. And so I think that's a new thing that doesn't exist, and that, in some respects, a machine can actually reveal to us.

I also think it's going to be about cognitive offloading of certain things, and being able to turn off for the day. I can easily see someone getting so wrapped up in the fact that the machine doesn't have to sleep, the machine doesn't have to eat, that they end up fourteen hours later still working with the machine and not turning things off. And so you're beginning to see that already, where people are saying, "You know, after about nine o'clock or ten o'clock at night, if you email me, I'm not going to respond. I'll pick it up the next morning." I think being able to cognitively offload some of your work and recognize that a machine's going to keep on working in the background is okay. But you, as a human, need to take care of yourself. That's also a skill. It's almost like how we deal with physical education for kids. We may equally need to do some version of cognitive relaxation awareness: knowing when to turn off your device, so that you're not on 24/7.

Michael Krigsman: A lot of social questions here. We’ve got only about ten minutes left, and one of the topics that I really wanted to talk about is: Michael speaks about the concept of “radical personalization.” I think that’s very important. Michael, would you tell us about that?

Michael Chui: Well, I think one of the things that we've often discovered when looking at data and analytics: Those of us who like data … You know, we look at averages and means in particular, and what we found is oftentimes the averages hide some of the most interesting insights. And so, being able to understand distributions has always been important when it comes to data; to use a marketing term, this idea of "segmentation." In fact, not every customer wants the same thing. Not every citizen wants the same thing. Not every citizen is going to benefit from the same sort of intervention, etc. And that’s one of the things we’ve known for many, many years.

But now we have the technological capability to look not only at three customer segments based on demographics, or ten behavioral segments, but to really help an individual based on what their needs are. You know, from a healthcare perspective: really understand, for example, their genetic makeup and then be able to customize something for that individual - a “segment of one,” as the people in marketing say. I think that’s a capability which is now coming to the fore. One of the things that we know is that thinking about people as individuals is something we naturally do as human beings, but having our machines be able to do that as well means there is a lot of value to be created.
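
A tiny numerical illustration of why averages mislead, and why finer segmentation down toward a "segment of one" matters, using invented spending figures for two hypothetical customer groups:

# Tiny illustration: the overall average hides two very different customer
# groups; a per-segment (or per-individual) view recovers them.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
light_users = rng.normal(10, 2, 500)    # spend around $10/month
heavy_users = rng.normal(90, 10, 500)   # spend around $90/month
everyone = np.concatenate([light_users, heavy_users])

print("overall average spend:", round(everyone.mean(), 1))   # ~50: describes nobody
print("light segment average:", round(light_users.mean(), 1))
print("heavy segment average:", round(heavy_users.mean(), 1))

# "Radical personalization" pushes this further: instead of two segment
# averages, a model conditions on each individual's own history and context.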

It does bring to mind, again, coming back to your question about ethics and values, how you ought to deal with the privacy question; because when you have enough information to be able to customize a service or a product for a person, that means you do have some pretty interesting information about that individual. And so, you have some questions about how you want to handle that. But as soon as you are able to understand and handle that - provide that individual citizen, or customer, or employee with the understanding of why and how their data is being used - then we can start to provide, as you described, radical personalization. It is one of the things that we described in our report from December as being a potentially disruptive force, because many organizations really are set up to deal with groups, as opposed to individuals. And, when a competing organization comes along and says, “Well, I can provide you with exactly what you need in a very customized way,” that can really change the game.

David Bray: And I think that is going to be a fascinating area for the public sector to try and wrestle with, especially in republics and representative democracies. Historically, the public sector has provided the same service, without any personalization, to everybody, because we're trying to be equitable. And, we don't want people to say, "Well, they got preferential treatment," or, "They got something special." But as consumers and citizens alike come to expect that they're going to get personalization from the private sector, they're going to look at the public sector and say, "Why aren't you treating me like an individual?" That is going to be a real thorny issue: How do we allow the public sector to personalize services to you, but still have checks and balances to make sure nobody is getting preferential or biased treatment?

Or, it may very well be that people don't want to reveal the information necessary to give the personalization. And so, that's where I think for public service - and it may apply to other organizations as well - we almost need to take the Golden Rule of "Do unto others as you would have them do unto you," and tweak it slightly, to Michael's point, to be, "Do unto others as they will permit, and maybe even would like, you to do unto them." And so, we may have to figure out how individuals in the public can express to a government what they are willing to have done with their data, and what they would like to have done with it, and recognize that's going to have huge variability across nations and across the world.

Michael Krigsman: We have got a bunch of questions coming in from Twitter. And, we don’t have that much time left, but here’s an interesting one from Chad Barbier, and he asks: What applications are you finding that automation is working well for today? Anybody?

Michael Chui: I'm going to mention a couple of things. Some of the types of activities that we found have the greatest automation potential are physical activities in predictable environments. And so, a classic case of that is an assembly line. So, we're starting to see a lot of robotics being used in those types of situations. What's interesting is that, as robotics decrease in cost, we're starting to see them used in services as well. For instance, at home, I have a robotic vacuum. Some people say that until we figure out the problem, we call it a robot; afterward, we call it a "dishwasher." And so, I think on the physical side, that's happening.

On the more cognitive side are two other types of activities: collecting data and analyzing data. And many times - I think people who are watching or listening will recognize this - how often are there systems where I have to look something up in this system, and type it into another, or cut and paste, or copy and paste, etc.? There is a set of technologies described as robotic process automation. They’re not physical robots, they’re software robots, but they automate some of these processes which are, as David says, these boring things where I’m just taking something from this application, copying it, and pasting it into that one - all those really rote, simple, and super annoying things. We’re seeing more and more organizations try to deploy those types of software robots to take away that really annoying work from human beings.
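
Conceptually, a software robot of that kind just reads a record from one system, reshapes it, and writes it into another. Here is a minimal sketch: the two "systems" are stand-in, in-memory dictionaries, not any real RPA product or application API.

# Minimal robotic-process-automation sketch: move records from a "source
# system" to a "target system" without a human copying and pasting.
source_system = [
    {"invoice_id": "INV-001", "customer": "Acme Corp", "amount_usd": "1,250.00"},
    {"invoice_id": "INV-002", "customer": "Globex",    "amount_usd": "980.50"},
]
target_system = []  # pretend this is the ERP the data would otherwise be re-keyed into

def transform(record):
    """Reshape a source record into the target system's expected fields."""
    return {
        "id": record["invoice_id"],
        "account_name": record["customer"],
        "amount": float(record["amount_usd"].replace(",", "")),
    }

for record in source_system:
    target_system.append(transform(record))

print(f"copied {len(target_system)} records:", target_system)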

Michael Krigsman: And David, your thoughts on what is working well today, in terms of automation.

David Bray: So, there actually was a competition about two years ago on Kaggle to see if anyone could write an algorithm that could grade a paper for a third-grade teacher; so, find the sentence mistakes and grammar mistakes. And for about sixty thousand dollars, someone actually wrote an algorithm that succeeded in doing that. And so, amplifying what Michael was saying, I think it is … My interests, particularly because I'm in public service, are what are those things that we can do to remove the rote, repetitive work from individuals so they can focus on the unique problem-solving things they need to do. So, I think it is making sense of large amounts of data to find errors, to correct things, to give recommendations back, and then to tip and cue a human to pay attention. Those are the things that I think are working today.

I think the challenge is there are a lot of cases [where] those systems that can make sense of patterns, and can tip and cue humans, don’t have access to a sufficient amount of data on things that are useful to the public - whether it’s because we need to make sure we protect privacy, or because that data right now is not in a format that can actually be used by the machine, [etc.] I think we need to have better conversations about what are the top, maybe three, challenges we want to solve as a nation, and then identify what data, as well as algorithms, we can bring to bear. But the technology exists today to find interesting patterns, to find things that are missing, and to make corrections.

Michael Krigsman: We’ve got just about five minutes left, and Michael, would you share a distilled summary with us of your thoughts about where this is going in the near-term, and practical advice that you have for managers and business leaders who are looking at this changing landscape and feeling a little bit confused about what to do?

Michael Chui: A couple of quick things. You know, one is we talked about data a lot, and I think one of the things that we found, and my colleagues who are helping various organizations around the world find, is that there is usually value just sitting on the table; because in most cases, organizations have access to a lot of data, whether it’s data within organizations or external or open data. And, a very small percentage of the value gets captured from that data that is already sitting there. So, Number One: Figure out what you can do with the stuff that you already have access to.

And then, the second thing, which is actually the harder thing: because data analytics, AI, and machine learning can add value to almost any process, the hardest thing oftentimes for an organization is to figure out what it should do first. And, that really just requires you to map out where you can do things, and then prioritize the things that create the most value and that you can capture most easily.

And finally, the last thing I’d say is you’ve got to solve the technical problems, but the hardest problems that we’ve talked about several times are how you move an organization. And, that requires just leadership, and so working on the leadership side to move an organization to use these technologies well is what’s important.

Michael Krigsman: And David Bray, your thoughts on how you move an organization, as Michael was just saying, to be able to take advantage of these technologies in the right way?

David Bray: So, looping back to how, ten years ago, Michael and I were researching distributed problem-solving networks, I think you need to recognize that no one person is going to know all the data that is of value within the organization, and no one person is going to know the processes that can best lend themselves to being adapted and improved. So, in some respects, you want to crowdsource it within your organization, and you want to be a champion, saying: anyone can come to me with an interesting pitch on the inside that says, "Look, if we brought this data and this data together, we'd have these insights. And then, we could tackle this process." And you almost treat it like being an internal venture capitalist.

That shifts the role of the CIO from being responsible for being top-down, and having to supposedly know everything in a rapidly changing world, to being almost a human flak jacket and champion for anyone who can bring interesting data to bear that can inform how the organization can do better and improve its processes. And I think that’s required because this is changing so quickly, and at the end of the day, we are changing what people are doing. You are changing how they work, and they’re going to feel threatened if they’re not bought into, “I’m okay with changing this process because I see the better outcome that will come as a result.”

And so, I think it’s almost an imperative for CIOs to really work closely with their Chief Executive Officers and say, “What I will do is, I will effectively serve as an internal venture capitalist on the inside, for how we bring data to bear, to bring process improvements and organizational performance improvements - and work it across the entire organization as a whole.”

Michael Krigsman: Well clearly, these new technologies - data, automation, AI, and machine learning - have the dual components of the technology itself and the organizational implications. And while that’s true of any technology that hits the enterprise, the potential implications seem even greater in this case.

You have been watching Episode #219 of CxOTalk. We’ve been speaking with David Bray, who is the CIO of the Federal Communications Commission, and Michael Chui, who is a partner at McKinsey with the McKinsey Global Institute. Gentlemen, thank you so much for taking the time today!

Michael Chui: Thank you, Michael!

David Bray: Thank you, Michael!

Michael Krigsman: It has been a great discussion and I invite everybody to come back next week because we'll be doing it again with another great show. Thanks for watching. Bye-bye!

Disruption in Education, with Rick Levin, CEO, Coursera

Rick Levin, CEO, Coursera
Rick Levin
Chief Executive Officer
Coursera
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Companies like Coursera are changing education dramatically. From higher education to vocational and skills training, online courses offer high quality instruction at lower cost than ever before. On this episode, we talk with an online education pioneer to learn about the impact of technology on modern education.

Rick Levin is the Chief Executive Officer of Coursera. In 2013, he completed a twenty-year term as President of Yale University, during which time he played an integral role in growing the University’s programs, resources and reputation internationally. He was named to the Yale faculty in 1974 and spent the next two decades teaching, conducting research, serving on committees and working in administration at the university. Rick served on President Obama’s Council of Advisors on Science and Technology. He is a director of American Express and C3 Energy. He is a Fellow of the American Academy of Arts and Sciences and the American Philosophical Society. Rick received his Bachelor's degree in History from Stanford University and studied Politics and Philosophy at Oxford University, where he earned a Bachelor of Letters degree. In 1974, Rick received his Ph.D. from Yale University and holds Honorary Degrees from Harvard, Princeton, Oxford, and Peking Universities.

Salesforce in Focus: The Analyst Perspective

Paul Greenberg, President, The 56 Group
Paul Greenberg
President
The 56 Group
Liz Herbert, VP and Principal Analyst, Forrester
Liz Herbert
Vice President and Principal Analyst
Forrester
Sheryl Kingstone, Director, 451 Research
Sheryl Kingstone
Director
451 Research
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Salesforce is one of the most important SaaS leaders in the digital economy. What makes this company significant and can it sustain that leadership position? We talk with three top industry analysts to get the inside story on Salesforce.com.

In addition to being the author of the best-selling CRM at the Speed of Light: Essential Customer Strategies for the 21st Century, Paul Greenberg is President of The 56 Group, LLC, an enterprise applications consulting services firm, focused on CRM strategic services and a founding partner of the CRM training company, BPT Partners, LLC, a training venture composed of a number of CRM luminaries.

Liz Herbert is Vice President and Principal Analyst at Forrester Research. She focuses on the technology services industry, helping clients navigate this fast-changing market and maximize the business value of their technology investments. As a principal analyst, Liz helps clients understand the key dynamics in the enterprise application services market and make smart technology sourcing decisions. Specific topics of focus include SaaS, SAP and Oracle Services, and services for cloud and mobile.

As Research Director at 451 Research, Sheryl Kingstone focuses on improving the customer experience across all interaction channels for customer acquisition and loyalty. She helps operator and enterprise clients make decisions regarding the use of technology, business processes and data to boost revenue and optimize business performance. She also assists vendors with custom research projects, messaging and positioning, as well as product road map evaluations.

Download Podcast

Salesforce in Focus: The Analyst Perspective

Michael Krigsman: Welcome to Episode #217 of CxOTalk. I’m Michael Krigsman, industry analyst, and the host of CxOTalk. These episodes bring the most insightful, experienced people in the world to have conversations about the impact of technology on our society, on culture, on business organization. Today, we are doing a special show with a focus on Salesforce.com, and we have three truly extraordinary industry analysts who have been covering Salesforce practically from the beginning, in some cases, maybe from before the beginning, in fact.

I’m going to start with Liz Herbert, who is from Forrester Research. Hey, Liz. How are you doing?

Liz Herbert: Hi! I’m good! I’m glad to be here today!

Michael Krigsman: So Liz, very briefly, what do you cover at Forrester?

Liz Herbert: So, I'm on Forrester's applications team, and the two key areas that I focus on here for Forrester are Software as a Service and services that surround ecosystems like the one that Salesforce has built up at this point. So, there are the two main things. And, for anyone who doesn't know Forrester, we work a lot with end-user clients as well as with the vendor community. So, hopefully, I'll be able to share with you perspectives from sort of a wide range we've been tracking here.

Michael Krigsman: Fantastic! You know, and I forgot to mention that right now, there is a tweet chat going on using the hashtag #cxotalk. So, please participate. You can ask questions, and we'll try to get those answered.

I also want to mention that I want to give a huge thank you to Livestream for providing our video infrastructure and distribution. And so, Livestream, thank you very much. They are just a fantastically great company, and I'm very grateful for their support of CxOTalk.

Our second panelist is Paul Greenberg. Paul, tell us about yourself and where your focus is.

Paul Greenberg: Well, I’m Managing Principal of the 56 Group, LLC, which is actually 55 more than the number of employees I have. And, I’m the author of “CRM at the Speed of Light.” I started out in CRM, but I've expanded my coverage to customer engagement, customer experience; pretty much anything with the word “customer” in front of it, or any customer-facing department.

Michael Krigsman: I will mention that Paul is very modest. His influence and impact on CRM, customer experience, CRM marketing, and the CRM industry are very, very substantial and significant. He's just so well-respected.

Our next respected analyst and panelist is Sheryl Kingstone from the 451 Group. Hey, Sheryl. Thank you for being here!

Sheryl Kingstone: Hi! Thanks, Michael for inviting me. And, I am the Research Director at 451 Research, and my channel or area of expertise is customer experience in commerce. So, I cover that area along with my team, and it looks at everything from the CX stack including Adtech, to Martech, to service, to commerce. So, it's nice to see some of the changes there, along with all the different deployment strategies and the infrastructure that goes along with it. We do look at it from an end-user point of view, and then we also work with a lot of the vendors.

Michael Krigsman: So, we have an amazing panel today. I think we should begin by placing Salesforce into context; maybe discussing the history of Salesforce briefly. It has a run-rate approaching ten billion dollars; in fact, they've announced revenue guidance for fiscal 2018 of ten billion dollars. So, it's a large and still-growing SaaS company. Sheryl, maybe you can begin; tell us why we should care about Salesforce?

Sheryl Kingstone: We care about Salesforce because they're a disruptive force in the industry. They were the pioneer of a new deployment strategy and a new way of delivering software. They came at it in 1999, and every one of us - I know myself, and Liz, and Paul - has really followed Salesforce from the beginning. When they started out, they were truly focused on sales, and they quickly grew to the entire CX stack. But, what's fascinating is that what they bring to the table is more than an application. It really is a platform of engagement. And that's what changed. They shifted the market around CRM to be less about a transactional system of record, and much more about a system of engagement that combines both aspects of what we need to do today to deliver effective customer experiences.

Paul Greenberg: You know, to add to what Sheryl is saying: The thing with Salesforce which has always been incredible is that they didn’t even invent the cloud. The cloud was there before they were, right? But, what they did do, which goes to Sheryl’s point on delivery, is they popularized it. They actually made it something that everybody started to use. That’s where their initial disruption came from. Now, since 2003, they’ve been a platform-focused company, even though their revenue comes mostly from the sales cloud and service cloud, and the like.

I had a conversation with Tien Tzuo, who was, at the time, the Chief Marketing Officer of Salesforce, back in 2003. I asked him, “What’s your future going to be?”, because I think they had just gotten the CRM designation on the stock ticker [...]. We figured, CRM. But, he said, “Well, CRM is just a means to an end for us.” He said, “What we’re really looking at is being,” I forget the exact way he worded it, but something along these lines, “the web service that all business applications are run on.” And, they’ve never, ever taken their eyes off that idea. Not for all these years from then on. Never.

Sheryl Kingstone: They really haven't, Paul. One of the things I do want to point out is that even before they got the CRM acronym, when he [Marc Benioff] was creating the market in 2000-2001, doing roadshows, he was frustrated in the back room that he had named the company "Salesforce"; that really wasn't his goal. He didn't want to do that, but, c'est la vie, it was what it was; and he couldn't even get forty people in the room for a roadshow. And now, when you look at what Dreamforce is, at 150-180 thousand people - he has to bring in a cruise ship - those are dramatic results.

Michael Krigsman: Liz, what has caused this dramatic result, as Sheryl just said?

Liz Herbert: Yeah. I mean, I agree with everything that Sheryl and Paul have been saying. You know, one of the things we've seen is what Paul mentioned, and Sheryl as well; that is, with Salesforce really pioneering the move into cloud computing for the enterprise in particular, clients that we're speaking with have gotten much more comfortable with cloud computing. In fact, a lot of the concerns that we used to see in the past - like security, like performance, like integration and customization - are perceived as advantages today. Clients I talk to now think they'll have better security when they go to the cloud, because Salesforce can invest many times more than they ever could. And the idea of a new kind of customization that's faster and perhaps more vanilla is seen as a better thing; it gets us out of that cycle of technical debt that we've all been dealing with.

And so, if I look at what we’re seeing today in terms of what our clients are thinking, not only are they viewing it as able to go head-to-head with any large enterprise technology, they trust it to run mission-critical parts of their business.

And then lastly, I would echo something that Sheryl started talking about, which is that clients are viewing Salesforce as the way they can transform their business. One of the most interesting changes we’ve seen over the past couple of years is that companies are even looking beyond Salesforce as a platform or a suite of applications, and they’re looking to it as an enabler of new business models. We saw this, for example, with the GM, General Motors example that got announced at Dreamforce a couple of years ago, where GM, like other automotive companies, is looking to become more of a connected-car company, and they’re also looking to grow their revenue streams. Salesforce is actually an integral part of one of those new revenue streams, which is the way that they serve out marketing in the car when someone’s driving by restaurants, coffee shops, etc. that they’re interested in.

You know, to me, that really is what has gotten people so excited about Salesforce the company, and also shows where this next generation of Cloud is going.

Paul Greenberg: Yeah. And that actually goes to the heart of a couple of things that Salesforce does. One thing they do better than anyone else; and one thing that they do, no-one else does.

The thing they do better than anyone else is they actually are capable of executing on a vision. And, that means something. Well, you know, here’s the thing: Vision has two parts, right? Part One is an idea of how things are going to look. But if you stop there, without Part Two, it’s just science fiction, right? Part Two is the person hearing it thinking, “Oh, you know what? I see myself as part of that!” That’s what Marc Benioff’s talent is. He actually can sell a vision, not just to customers and prospects, but to his own employees.

So, typical of most tech companies, Salesforce makes product announcements [about products] they don’t have yet. However, they have about - let’s call it 60% of the product. That’s just a number I’m making up; I don’t know the real number. But, the thing is, they tend to actually build the rest of it much quicker than most other companies, because the employees buy into it, right? And then the employees put the time and effort into it. So that’s the thing they do better. They have a real vision, and it gets presented in a way that people get engaged with.

Liz Herbert: And also …

Paul Greenberg: Well, one more thing.

The second piece is Ignite, right? They have a program called Ignite, and oddly, it’s vendor-agnostic in terms of the way you approach it. What it is, is basically a way that, as a trusted advisor, a Salesforce employee or staff member will go in with their prospects or customers and say, “Hi! Let’s discuss business transformation. Let’s figure out how to do it. Let’s be innovative. Let’s work this through.” And they’re not saying, “Let’s use Salesforce,” they’re just saying, “Let’s come up with ideas.” They leave them with both the framework and methodology, and with new business models, to Liz’s point.

Liz Herbert: [Laughter] I would also say there's one other thing that they do better than any other company that I've seen: [they] get clients excited about the product in a way that they're willing to go on record and share their stories. One of the things that is very notable about Salesforce is they do paint this vision, and exactly as you said, they're great about following through. Then, they get the customers up on stage. Whether that's literally up on stage at Dreamforce - which, as Sheryl mentioned, has just become a humongous tech industry event, and really an entertainment industry event at this point - or on YouTube or any other public channel, they go further than any company I've ever seen in terms of the transparency and depth with which you can find real customers talking about the real things that they do. They take that from vision to reality. It's exactly as you said, Paul.

Michael Krigsman: But, they’re …

Sheryl Kingstone: [...] to add to that, though. One other point on the customers: I don’t want to make it sound like they’re doing absolutely everything perfectly, because everyone makes mistakes. There are still lessons to be learned, and problems with the user interface and with some ways of looking at it. But, the point is that customers are willing to be upfront about what was working and what was not working. They have a sense of transparency that no other software company really delivers on today, and that’s something the whole industry has to learn from; because the world has shifted to true transparency.

Liz Herbert: [...]

Sheryl Kingstone: Views out there … We need metrics, and we need maturity models. And that is something that Salesforce has always been very upfront with. And maybe, they’ve stated it themselves, “Look, we fell behind in X, Y, Z. We’re now working on it,” or, “This is where we’re going to work on roadmaps.” And so, that sense of transparency adds trust to their customers and the customers feel confident in saying, “This is how we’re going to use the next generation of the platform.”

Paul Greenberg: You know, that actually … To Sheryl's point, I actually remember a story about that. Three or four years ago, I went to the DC version of Dreamforce. They do these roadshows everywhere. And at this one - oddly, this was in DC - they had fourteen thousand people, right? And, Marc Benioff, the CEO, stood up and said, "Look, audience. I want to test with you the messaging that we're potentially going to use at Dreamforce." And he pitched the Social Enterprise, which turned into an utter failure, of course. But, he actually told the audience it was a test, and then he solicited feedback. I've never in my life heard that from any, let's say, leader of a company speaking on stage! Messaging is usually well-crafted and focused, and typically, companies don't let you know that it's a test rather than the absolute truth about everything. This guy was saying, "Look. This is a test. I don't know if it works or not. Why don't you tell me?"

Michael Krigsman: Yeah. But Paul, one thing about this. I mean clearly, Salesforce’s marketing is incredible, and their vision is incredible as well. But, they have taken stabs at things like that whole “Social Enterprise,” and they presented it as the way of the world. Yes, as …

Paul Greenberg: [unintelligible]

Michael Krigsman: As they were developing the message, yes. But at the same time, at Dreamforce, it’s the way of the world.

Paul Greenberg: But, look. The answer is yes, they do that, but so does every company that you’re ever going to see at any conference. All of us combined have probably been to, what? Four million and two hundred thousand conferences? Somewhere in that vicinity? Not far off from the truth, either. So, every single one of those companies does exactly that.

Look. Salesforce says … It’s funny you bring that up, because that goes to a different part of Salesforce, but it’s not just the marketing part [where] marketing says, “We are the ones.” You know, look. I have a standing mantra on the “We are the only ones,” and my mantra is very simple. It’s as follows: “Look, if you’re not telling me that you’ve just invented a new element in the Periodic Table, you’re not the only one,” right? So, you know, don’t even say it, right? But they all say it.

The thing with Salesforce, though, is they have this extraordinary culture. There really is an actual, genuinely extraordinary culture. I know a lot of lower-level employees at the company. Several of them are literally the children of people I’ve known for forty years, right? And they love the company. They absolutely love it: the benefits, everything about it, the workplace environment. Take that culture, and also the fact that they take political stands, and that they’re diverse, or attempt to be diverse, I should say; they’re doing all the things that the younger generations - and really everyone, but at least the younger generation - should be concerned about in the workplace. In the meantime, it is the most explosive, dynamic company in the technology world.

But, what happens is, when that happens, you find people in the company who become arrogant - very arrogant, in fact. I mean, you know, I've fired shots across their bow for that reason a few times. And why? Because those people misread what the culture is. Now, the culture doesn't support or enable that in any way; it actually works to disable it and discourage it. Their culture's really good. But, those kinds of definitives - "We are the only ones," or, "We're it" - and those individuals, those are, let's call them, "outlier results" of things …

On the one hand, there are the individuals who are the outliers; on the other hand, the “We are the only ones” messaging is just messaging, and they’re always going to do what every company does. Salesforce just seems more definitive because they are the most dynamic of the companies.

Michael Krigsman: What are some of the challenges? We’ve had this real lovefest. Are there any challenges facing Salesforce, or is everything perfect?

Liz Herbert: I mean, I'll start with a couple. I know we've all got some ideas on this. You know, one of the biggest challenges that we're seeing is the threat from new competition. You know, when Salesforce first started emerging onto the enterprise application scene - really, from Forrester's standpoint, we didn't start seriously tracking them until 2004 or even 2005 - most people were skeptical that the whole Software as a Service model could really materialize into something for large-scale, mission-critical types of applications the way that it has today. And you saw that same behavior with the leading vendors. Most vendors were still very negative on the whole idea of Software as a Service. Other large ISVs made attempts at "as a Service," but it was never really front-and-center in their strategy. And, if you look at the competitive landscape for Salesforce today, it's exactly the opposite, right? Everybody's talking "cloud," everybody wants to be the leader of the cloud world; and to me, only in the last year or two have we even seen real, viable threats to Salesforce.

To me, that's one of their biggest threats: they used to be in a unique position to be able to go out there and win these deals, and nowadays, they've got much steeper competition coming from, really, everywhere. And then on top of that, what you see is, like anybody, as you start to grow, your culture does start to [...]. And one of the things we've identified with clients we're working with is that Salesforce is behaving a lot more like a traditional, large ISV. They're a lot more difficult to deal with. They're actually getting a little bit more aggressive on discounting and costs, and some of the traditional factors that our clients always complained about with any of the large on-premise ISVs.

To me, that sort of … I wouldn’t want to say it’s overly negative, but there is this growing negative sentiment in their customer base because of some of that behavior, combined with the threat of new competition. To me, this is going to be one of the biggest challenges for them in the next few years.

Sheryl Kingstone: I agree with everything Liz just said. They do have challenges with respect to competition. I would also say, looking at some of the issues and struggles that they’re having today, a lot of it has to do with improving that UI [user interface].

So they were cutting-edge at first, and now enterprise companies are still struggling with the best way to maximize the information in it. We're still struggling with that 360° view of the customer across all the different clouds. We're still struggling with the ability to really maximize the return on investment, because, in the early 2000's, we did make the case for SaaS versus premise-based software. But, even though they thought of it as a low-end type of solution, the way we made the case was by targeting Siebel, because Siebel was a multimillion-dollar deployment. It was a no-brainer.

Now, we have a different environment. And so, now we have to understand issues around pricing: whether the pricing models are changing, and what we are doing with best practices for getting information in and out of these systems quickly. Again, it really comes down to ease of deployment and ease of use, which is sometimes out of the hands of Salesforce. It's, "What do you do with the software itself?" And, that was a problem that Siebel had. When you move more towards a platform play, you're also enabling the enterprises themselves to make mistakes and implement something that's actually not usable within the organization. And that's what they're trying to address with Ignite. But, the other thing with Ignite is that it's very high-level. It's business transformation, whereas you also need to pay attention to grassroots deployments.

How is every user within the organization leveraging that platform for their use? How easy is it to get in? How easy is it to find the information, and how easy is it to collaborate on that information? And that’s something that’s going to be a stumbling block, still.

Michael Krigsman: Well clearly, as Salesforce has grown, it is no longer just about the technology, but rather, how that technology has an impact on the business inside the customer organization. As you said, one of the key roles of that Ignite program is to support the transformation of customers.

Now, we have a couple of questions from Twitter, so let's go to them. So, Liz, this one is really for you, from the B2B News Network. You mentioned earlier, Liz, that customers are looking at Salesforce as a platform enabler for new revenue streams. And, the B2B News Network is wondering if you, or anybody, could give some examples of that?

Liz Herbert: Sure. Yeah, I mean, I gave the example of General Motors. They were one that was talked about publicly, so I know they're one that we are allowed to mention. So, just to recap that example: GM, as part of its connected car efforts, recognizes there are many ways they can create additional revenues beyond the early use case of OnStar, which was a health-and-safety type of solution back in the nineties. When GM looked at OnStar, they realized that marketing, and collecting advertising revenues while a driver is on their way - maybe getting an ad for a local Starbucks, or something like that - would be a new revenue stream for GM. It really opened them up to a new kind of engagement with their customers that, perhaps, hadn't been there before. So, this is an example where they realized that [while] the technology that had made OnStar very successful was pioneering in the nineties, it really was too brittle to be able to sustain this new revenue stream.

Salesforce is part of that modern OnStar platform, and it does represent some new revenue streams for them. I would say this: A lot of clients that I'm speaking with are looking at it. I know in some of the Salesforce events, they've also talked about a lot of companies who, even beyond that Ignite program, are looking at this. Because every company is now kind of becoming a SaaS company - that's something that we've been doing a lot of research around: whether you're in financial services, manufacturing, or health care, a lot of what you're doing now is acting as a SaaS company - and more and more, Salesforce is positioned to be part of that.

It’s hard to say too many more examples. I have to check which ones are public, and which ones we’ve known through our NDA’s, so I’ll probably hold off on more than just that GM example right now.

Paul Greenberg: Well, I mean, another one, which is a little bit nebulous but actually speaks to a problem Salesforce has more than to this company, is Philips Healthtech. Philips Healthtech is one of Salesforce’s longstanding customers. They use other things - they use Oracle for some things, and the like, too - but they’ve collaborated very closely with Salesforce, especially since Philips itself split into two companies. And, the interesting thing with Philips Healthtech is that it started out originally as kind of a devices-and-software business, and has now transformed into a platform company. And they’re selling multiple services. They’re still [selling] hardware and software, but it’s primarily platform-based. They’re expanding the service offerings greatly, and they’re building that out in conjunction with Salesforce.

Where Salesforce has a problem, though - and this goes, to me, to one of the biggest issues, and to why it's not easy to present more of these kinds of examples - is that they're arguably one of the great platform-thinking companies that I've ever seen. They've made it easy to be part of the ecosystem, with the AppExchange originally, and the APIs are open. It's very easy to get into the marketplace with Salesforce. And Leyla Seka, who created it, is now back on it.

The problem is where they fall down when it comes to the other part that makes a company successful, scalable to the levels that Salesforce wants to reach, and good at partnering - which is ecosystems. And they’re not ecosystem thinkers. They have a natural, organic ecosystem that is unlike anything I’ve ever seen, but they don’t know how to take advantage of it at all. And that’s why you see it almost every time they deal with a partner; the partner ends up relegated to the AppExchange, instead of a strategic, “go to market” kind of approach.

Now, they have a program in place with ISVs that has some promise, but there's no bridge between the AppExchange and getting into that program. Number One, typically when they do a strategic relationship with a company, it's just basically, "Stick ‘em on the price list and forget ‘em," right? And that's it. And there are only eight of those.

So, that makes it difficult to create the Philips Healthtech kinds of relationships, and to carry them and expand them much further, even though they've been successful up to a point. But, it's hard for them, as a company, to get past that point until they actually do combine platforms and ecosystems.

Michael Krigsman: And in fact, that goes back to the DNA of Salesforce from the early days: building the platform with the AppExchange, and being able to present lots of apps on that platform and have partners extend it.

We have another question from Twitter, from Jesús Hoyos, who many of us know. And, Jesús Hoyos is asking, “How do you see Salesforce moving from a traditional B2B CRM stack, to a combined B2B, and B2C stack with so many clouds?” So, in other words, they are now selling to customers, and to businesses. And that’s the movement, right? So how do they manage that transition, and is that actually happening?

Sheryl Kingstone: Part of what they’ve done is they’ve traditionally been very [...]. So, yes. They were very strong in the B2B, sales-oriented approach. However, they’ve made key investments over the past year that people don’t realize, beyond Demandware: with Heroku and some other platform pieces that are clearly giving them some expertise there. They’ve made some investments on the marketing side, with ExactTarget, and they’re reworking some of that.

What was absolutely critical was their acquisition of Demandware. And that changed the game. And, I know that I'm a little biased because I do look at customer experience and commerce as a complete entity, but that gives them the expertise to look from their customers' customers' point of view. It gives them the data, and more of a connection into that business-to-consumer [space], that will actually benefit the marketing cloud. And, in the end, that will change the game for them to give them more expertise that they didn't have on a B2C approach, versus the traditional sales automation opportunity forecasting side of it.

And then, when you look beyond that, there’s the expertise that they also have with the service cloud. Because, when we look at the industry differences between a B2B company and a B2C company, there are similarities, but there are direct differences in how we handle those interactions with our customers, especially when you’re talking about scalability. So, we do have to understand how to shift the dynamics toward self-service. They’ve made movements there; they’ve made acquisitions - for example, a great acquisition of a relatively unknown company around LiveMessage - again, bringing in expertise about how to interact with consumers.

And, all of this also brings [us] to data. So, in the future, as they look towards leveraging, for instance, Krux, where they’re taking the combination of Krux with commerce and their expertise in marketing, that does give them a lot more insight to show brands how to connect with their consumers.

So, they are no longer just a B2B company, especially with some of these marquee acquisitions, because they have the insight and they have the technology. Now, they really need the partnerships; and again, it goes back to what Paul said - the ecosystems - to execute more effectively in a B2C model.

Liz Herbert: And I would just add that you see the same trend playing out within their services provider ecosystem. [An] ecosystem, of course, means lots and lots of different things, but here I mean specifically the partners who have been helping clients go live and get value out of the solution. If you look at who those partners were ten, fifteen years ago, it was very much the companies who were focused on enterprise application rollouts. Accenture and Deloitte were early partners. They had a number of boutiques really focused on sales and service. For the last couple of years, something interesting has happened. You actually see agencies now in the Salesforce ecosystem. So [...] is now a Salesforce partner. And so to me, when you look at how this technology really goes live and gets successful within a client account, that service provider ecosystem is such a big part. And that's another shift that will help them get there. It's not all on Salesforce, but they've been able, at least, to start to attract some of these service providers that are really experts in the B2C world.

Michael Krigsman: It sounds to me like this is the natural evolution of a software company as it grows, evolves, and becomes a large company and matures.

Paul Greenberg: Well, the natural evolution of a company that wants world domination, yeah. I mean, it's not the natural evolution of most companies, in that they're reaching for an entirely different category of existence that they want, right? I mean, which is what they're doing.

I mean, what Liz and Sheryl both said was on the money. There isn’t a whole lot to add to that. The only thing I will add … When we were at the analyst seminar that you all remember, they were making continuous - let’s say slightly louder than a whisper - references to B2C on a fairly regular basis, even though they never sat and spent the time to discuss or verify it. They’re just all-in right now on that. And the key, to Sheryl’s point, was Demandware.

And also, the other aspect: They’re completely revamping all their thinking about customer journeys too. You see, a B2C customer journey is a whole other universe.

Sheryl Kingstone: [...]

Paul Greenberg: Go ahead!

Michael Krigsman: Go ahead, Sheryl.

Sheryl Kingstone: [...] But the other thing we haven't discussed, speaking of acquisitions and what they've done - and this has nothing to do with B2B agencies, it's the platform again - is eleven acquisitions in artificial intelligence and machine learning. And, what are they going to do with that? The reason I'm bringing it up is we know what they're going to do. I'm saying, let's discuss letting that into their business applications. The other thing we have to point out, and one of the things I said earlier: it is about the data. So, one of the criteria that they do have to work on over the next five years is getting their customers on board with opening up that data stream to educate Einstein for improvements. But, if we do talk about digital transformation and where companies are shifting in the future, a lot of it has to do with leveraging the data. What I am throwing out to the team is: How are we going to get beyond this so that Salesforce can also use some of the key assets that they have today for their customers?

Paul Greenberg: Well, you know, we’ll have to get Marc himself to open the LinkedIn APIs up.

The thing is that I think a lot of it does go not just to use of data, but to what you were saying more about use of data and the AI user data - the combination, really. They're taking a very explicit [stance] when it comes to AI, which is that Einstein is part of the platform. Where Oracle, for example, is absolutely adamant that it's not part of a platform; it is [rather] effectively added into applications. That's literally the way Melissa Boxer and company look at it.

To me, the single most important thing that came out of Dreamforce was putting Einstein on the platform, because that gives them the opportunity to effectively shape and formulate its use in anything they're doing. And you saw all the use-cases they had in that gigantic wheel, of which they probably have three. But, the reality is that the initial group is going to be pretty much a combination of, from what I understand anyway - let's call it, "engagement intelligence," and the equivalent of telling a story one way or the other; narrative things like that, right? I don't exactly know. They phrase it differently, but those are the two concepts, really; meaning, take the insights and actually do something with them. But, that's where they're seeing Phase One of Einstein across multiple clouds. I know Marketing Cloud sees it that way explicitly.

Michael Krigsman: Well, it seems to me that they are still figuring this out. As Sheryl said, eleven acquisitions in AI really tells a big part of this story, which is their intention, as well as the importance that they place upon AI in their future: using and distributing AI and AI benefits to their customers across all different parts of their platform. But these are very early days, as far as that goes.

Paul Greenberg: Yeah. I mean, keep in mind AI itself has been around a long time. It's starting to become something that's seen as necessary, and at scale. And Salesforce really has no option but to get involved in it. I mean, aside from their direct competitors, the big guys, the Big Four-and-a-Half - I heard Adobe's the "half," right? Beyond the Big Four-and-a-Half, you'll potentially have death by a thousand cuts there, too. Little companies like Cogito, Agent.ai, Radius, and all these other companies who are doing pretty substantially advanced kinds of AI work and have real business results that they can point to. There are hundreds of companies like that right now. I mean, I'm not saying they're all equally good, but I can probably sit down and list 20 off the top of my head that are [...]. And, Salesforce has no option but to either compete with them or to bring them into their orbit. And, Einstein, in effect, has to be …

And look at their B2C market. What’s the big deal now? You’re talking about voice interfaces like Alexa, and so on and so forth. You’re talking about all that kind of stuff, but guess what? Amazon put a thousand people on there. A thousand people on the Echo, and the Alexa, and that whole idea of new interfaces for the use of advanced artificial intelligence. And then you have robotic automation? I mean, Salesforce has no option but to go all-in. Yes, it’s early-stage, but they have to move really fast.

Liz Herbert: Yeah, exactly. I mean, I would just add to that, Paul, that you already brought up Alexa. To me, the new breed of competitors that they're fighting in this territory is the players that have historically been considered infrastructure-oriented.

Paul Greenberg: Right.

Liz Herbert: So, AWS, Google, Microsoft - that’s the side where the Dynamics apps sit more on the Azure stack - all of those companies that you’ve alluded to have some pretty deep investments, whether it’s voice recognition or image recognition; those are a couple that AWS [showed] at their customer event this year.

And, exactly. I see that Salesforce not only has to keep pace with customer expectations based on what goes on in the apps world, but with these cloud platform providers as well, who keep moving up the stack with their own ecosystems, marketplaces, developer tools, and so on. They're not exactly commodity infrastructure plays anymore.

And, the other point I would go back to is what you said right up front, which is: This is another place where Salesforce really needs to figure out its broader ecosystem strategy because no one company is going to own AI. I mean …

Paul Greenberg: Yeah.

Liz Herbert: … you’ve got bots, and you’re going to have more industry-specific use cases, like bots in health care. Every service provider under the sun has its own AI play. You’ve got IPsoft, and some of the investments they’ve made. So, to me, that’s not a place where Salesforce really can try to be all things to all people. It has to figure out exactly what Einstein does, where it fits with some of these partners, and where it works together with these other companies.

Paul Greenberg: Absolutely.

Sheryl Kingstone: And it goes back to the business case. So that's why I raised it. Putting it in the platform is great; putting it in the application, or powering those use-cases, is essential, which is what Salesforce has done. But, Liz raises a great point that the infrastructure companies are coming up fast, and when we look at the rise of what I call "invisible interfaces," now all of a sudden these traditional business applications can be relegated into the unknown, so they become very behind-the-scenes. Well, what are they going to do to make sure that they're powering those invisible user experiences, and staying relevant to their customers? That's another thing they have to transition. Think about how Siebel didn't remain relevant because they were relegated to a system of record. The same thing is going to happen as we maybe, finally, start shifting to speech.

We’ve been there; we’ve been talking … This isn’t new. We’ve been talking about speech for twenty years. Nuance tried to do it for years, overlaying business applications. But now we’ve got, as you said, Paul, a thousand people on these new technologies, and then putting intelligence on top of it; it is going to change the way we interact with applications.

Michael Krigsman: And one of the extraordinary things about Salesforce, it seems to me, is the way they are able to be agile, to change and adapt over time, and to innovate and stay focused. I mean, think about some of their larger competitors who have lost their way, and have had trouble innovating and bringing products to market in a reasonable timeframe. Salesforce has done an incredible job of being able to do that.

Paul Greenberg: Yeah, but I don't underestimate their competitors either. You know, that's sort of, to me, a lot of that is an urban myth. I mean, look. One of the most innovative companies that I've ever seen is SAP. They just don't know what to say about it once they do it, right? I mean, they just can't get it out the door, right? But they're…

Michael Krigsman: But Paul, I don’t mean to interrupt, but isn’t that … If we talk about innovation, innovation that doesn’t get out the door; is that innovation?

Paul Greenberg: Well, yes. That's metaphorical, right? I mean, they're just bad marketers, is what I'm really saying, right? You know, the thing is with Salesforce ... The problem with the speed of change is that it's accelerating now. And, you know, even if we said earlier that AI has been around a long time, it's just in the last eight months or a year [that] they were really talking it up, like, massively; everywhere. And I'm not saying that people haven't been concerned, but it's becoming a matter of popular lore now, right? You know, that AI is something that has to happen.

What Sheryl was talking about would be the transformation of user interfaces. Look, I used a product that was literally called "HAL 2000," in 2000, right? It was a voice interface using - what was it called - "X-10," right? It was for the house. It existed! And it wasn't bad, you know? Not good either, but it wasn't bad. But, this stuff has been around.

But now, what’s happening is it’s becoming popularized. It’s becoming part of culture. It’s becoming part of how people do what they do. It’s not a coincidence that when the Echo first came out, it was an amazing device, but it didn’t do anything, really. And I had it! And, Brent Leary and I were talking about it, in fact, and saying, “Well, this is amazing, but it doesn’t do anything right now. But someday … “ I mean, you knew it, right? Someday, this is going to be something.

And then now look at it. Five million of them sold over Christmas, right? Five million! You're talking about something that, weirdly, as sexy and hot and exciting as it is, is becoming commoditized already, right? I mean, it's like … And that's why there are a thousand people working on its natural language.

But, this is what Salesforce has to put up with! Guess what? What else have we been talking about for fifteen years? The idea of the consumerization of IT. But, that's just a matter of how we think inside of work, right? And that's what we're really talking about. We're talking about consumer IT; we're talking about, "We want it at work," because that's what we think about. The smartphone was the first thing like that. Now we're starting to wear all these things.

Salesforce has to stay up with all of that, and one of the reasons for that is because they are a … This is of their own doing. They're the most - in terms of, let's call it, "following" - they're the most Apple-like company in all of enterprise technology and all of business technology. They have fanboys and girls. What kind of business company has fanboys?

Michael Krigsman: Well, there is …

Paul Greenberg: [...]

Michael Krigsman: There is no doubt about that! And …

Paul Greenberg: Yeah, but that means they have to actually utilize the customers who buy from them. That’s the one difference. These are the ones who make these multimillion-dollar decisions.

Michael Krigsman: They have …

Paul Greenberg: No, go ahead.

Michael Krigsman: They have found a way to activate those fans. And that is the Holy Grail of marketing. Liz is shaking her head.

Liz Herbert: I would say, exactly. I mean, something very unique about them in terms of how they can continue to innovate at a fast pace is they are loved by business users.

Paul Greenberg: Yeah.

Liz Herbert: They have always had the hearts and wallets of business users, and that’s something most of these Four and a Half - maybe not so much the Half, but the other four ISVs - have struggled with; I mean, they’ve always been perceived as much more IT-centered versus business. To me, when you look at exactly what Sheryl said earlier, it’s going to be about the business case, who is going to win in these battles, and it’s innovation coming more from the ideas that you’re getting from the business directly, the horse’s mouth. That gives you some advantages. I think today, they still have a leg up on some of the competition because of that.

Sheryl Kingstone: And, wait a second. I'm going to go back to the other, other thing that really did push them into the winning case. Let's still go back to the platform, and what they did to enable the long tail of application development. Because if they had stayed at that core, vanilla, "Here are your sales, here's your service," they would never have been where they are today, because the maturity of applications used in the industry today, across multiple different verticals, comes from those one-off applications that Salesforce can't create. And those are what make things sticky. And so, I keep going back to: it's great that they are focusing on the platform, because they are enabling their customers to create sticky applications that stay. But, they also need guidance. And so, that's where Mike comes into play.

Michael Krigsman: Okay.

Paul Greenberg: And, to your point on the platform: The other thing with them, and I’m going to go all the way back to the idea of their vision; they’ve never, ever, ever taken their eyes off the fact that they want to be a platform and not an applications company.

Michael Krigsman: And, very customer-centric, as well.

Paul Greenberg: Yup.

Michael Krigsman: We are basically out of time. But, I think we should do one last quick round-robin, with each of you offering your thoughts, comments, and advice for buyers, advice for Salesforce; any last, closing thought. Who would like to go first?

Paul Greenberg: Okay. I'll take it. So, it's what I said earlier. Salesforce, you're at the point now where you have to think in combination with Platform, that ecosystem. That means harnessing the organic ecosystem you actually have, right? And then doing something in a go-to-market, strategic sense with them, and then making that whole ecosystem - that partner technology ecosystem - work. That, to me, is a strategic goal as an organization: changing the way your structure works, everything about you. That's what you've got to do.

Liz Herbert: My advice is more for buyers. I work a lot with buyers who are in it with Salesforce. As Sheryl mentioned earlier, there is the idea of lock-in, which very much is happening. So my advice is, don't go too fast. We have a lot of clients out there today who believe speed is everything, they believe business agility is the Holy Grail, they believe fast is better than perfect; there are all sorts of flavors of this. But, sometimes this can actually be a downside, because we've seen Salesforce executed in every single division, all with different strategies, with no central thread. Usually, it ends up a mess, it ends up expensive, you end up stuck, and in the long run, you're not going to have a lot of agility when you're moving that way. So, we're trying to encourage our clients that, "Yes, it's great that you can go a lot faster, you can do releases much more quickly; that doesn't mean that you should just do it ad hoc and with no planning if you're really viewing it as a strategic platform, and especially when you're thinking about using it for that long." [That's] because it's not equally suited to every app under the sun, the way some might believe. And that's a place where we [don't want to] see clients get into trouble.

Sheryl Kingstone: Lastly, fundamentally, the applications haven’t changed. It really is all about simplicity and user experience. And, the one thing that can bite you is if you don’t think through that workflow, that business process, and what you’re delivering to your end-customers. And, the data. I keep going back to the data. You are going to be the owners of some critical information that could change the game for your business decision-making, and if you’re basing decisions on poor data, which is what has historically happened with a lot of CRM applications, you’re dead in the water. So you have to make sure that A) your users are using the application, which means it’s simple; B) that it’s clean, and they have the right data; and C) that it’s complete, right?

So, take a look at all the data that you’re missing today. Capture everything, because that knowledge is essential for leveraging it towards the future.

Michael Krigsman: Alright. Wow. We have been talking with three of the top industry analysts in the world, covering Salesforce.com. You have been watching Episode #217 of CxOTalk, and I want to thank Liz Herbert, from Forrester Research, Paul Greenberg, from the 56 Group, and Sheryl Kingstone, from 451 Research. And everybody, thank you so much for taking the time. I really appreciate the three of you being here today.

Paul Greenberg: Thank you.

Liz Herbert: Thank you.

Michael Krigsman: Thanks for watching CxOTalk. We have an amazing show coming up this coming Friday, so check the schedule: cxotalk.com/episodes. And, thanks for participating everybody. Bye-bye!

AI and the Digital Healthcare Revolution

Shari Langemak, Editorial Director, Medscape Deutschland
Dr. Shari Langemak
Editorial Director
Medscape Deutschland
Daniel Kraft, Faculty Chair, Medicine, Singularity University
Dr. Daniel Kraft
Faculty Chair, Medicine
Singularity University
Dr. David A. Bray, Chief Information Officer, Federal Communications Commission
Dr. David Bray
CIO
Federal Communications Commission
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Artificial intelligence is disrupting healthcare, from the largest institutions to the most intimate doctor-patient interactions. What’s driving digital medicine, and what challenges might get in the way? Hear insights from our panel, including Shari Langemak, editorial director of the physician network Medscape in Germany; Daniel Kraft, Singularity University’s faculty chair for medicine, a scientist and inventor; and David Bray, Chief Information Officer of the Federal Communications Commission.

Shari Langemak is a physician, a journalist and a digital health strategist. Her talks and lectures are mainly concerned with the opportunities and challenges of an ongoing digitalization and medical innovation. Apart from her job and speaking activities, she advises several startups. She works as Editorial Director of the German branch of the multi-language physician network Medscape. She graduated from Ludwig Maximilian University (LMU) of Munich with a degree in medicine, gained practical experience in London and Shanghai, and finished her PhD at the LMU’s department of Psychiatry. Furthermore, she completed an MBA program at IE Business School in Madrid.

Daniel Kraft is a Stanford and Harvard trained physician-scientist, inventor, entrepreneur and innovator. With over 20 years of experience in clinical practice, biomedical research and healthcare innovation, Kraft has chaired the Medicine Track for Singularity University since its inception. He is the founder and Executive Director of Exponential Medicine since 2011, a conference that explores convergent, rapidly developing technologies and their potential in biomedicine and healthcare.

Download Podcast

AI and the Digital Healthcare Revolution

Michael Krigsman: ... And, it is a great product. So, thank you to Livestream.

Today's show, we're talking about AI in healthcare, and we have an amazing group of people joining us. And, let's begin with Shari Langemak, who is with Medscape. Shari?

Shari Langemak: Hi, nice to meet you. I'm really looking forward to the discussion. Just a few words about myself: I work for the German edition of Medscape. I'm based in Berlin; that's why it's a bit darker in here. I'm very involved in the digital health scene here in Europe, and I advise a couple of startups and investors here in Berlin. And, digital health and especially AI is my passion. [Laughter]

Michael Krigsman: Fantastic, Shari! And next, in no particular order - I'm choosing by the order in which they're displayed on my screen - is David Bray, who has been on CXOTalk a number of times. And, is the Chief Information Officer of the FCC. David Bray, welcome back!

David Bray: Thanks for having me, Michael. And, it's always a pleasure to be here. I have to admit, while in my current capacity I don't do healthcare, coming from the Centers for Disease Control in the past, and being involved with the bioterrorism preparedness and response program, I'm very interested in how we can make improvements in how we respond to disruptive health events, both locally and globally.

Michael Krigsman: David Bray, thanks a lot. Last, but not least, is Daniel Kraft, who is doing too many things to count. And, Daniel Kraft, why don't you tell us about yourself?

Daniel Kraft: Hey! I'm a physician-scientist by background, trained in [...] medicine, pediatrics, hematology, oncology, and my academic role now is chairing medicine at Singularity University, where we look at where technologies are heading - fast-moving, or exponential, ones - and the ability to leverage those across challenges from education and the environment to healthcare and beyond. And I founded a program out of Singularity University in 2011 called "Exponential Medicine," where we look at how we might leverage things like artificial intelligence, low-cost genomics, drones, and big data to re-shape and reinvent health and medicine across the spectrum.

Michael Krigsman: Fantastic! So, Shari, you gave a talk not too long ago in which you described some of the key disruptions having to do with data and other things relating to healthcare. So, as an overview, do you want to maybe just share some of those thoughts with us?

Shari Langemak: Absolutely! Absolutely. So, we are progressing so much in healthcare right now. We're collecting lots and lots of data - more and more data - not only due to studies that are conducted worldwide, but also due to our mobile phones. So, everybody's using health apps, and we get a lot of information from that. And with the help of this data, we will finally be able to treat diseases in a much better way. So, we speak about the era of precision medicine. A patient is not just the disease or the symptom anymore; he is a person with many different factors we can take into account for his own treatment. That can be his own genome, or a tumor genome, and even his microbiome or his treatment preferences. So, with the collection of all this data we have right now, we are finally able to find the right treatment for each and every patient.

But of course, this is one of the topics we'll probably dive a bit deeper into today: It's very, very hard to take all these different factors into account, because for a physician, keeping up with the speed of information, with the speed of knowledge, is very, very hard. So, we need some sort of algorithm, some sort of machine learning, to help us and support our decisions.

Michael Krigsman: Daniel, what about this notion of algorithms and machine learning?

Daniel Kraft: Well, as Shari mentioned, we're in this exponential age where you want to get to true precision, personalized medicine, and give the right drug, the right therapy, the right prevention, diagnostics, and therapy. The challenge is it's really hard to connect all those dots right now, and for any physician, pharma person, anybody who's trying to make sense of all this data, our brains are challenged. Our brains haven't had an upgrade in at least a million or two years, but our wearables, our devices, our ability to compute is changing at an exponential rate.

So the challenge now for a clinician is to integrate someone's digital exhaust, the wearable devices I'm wearing - the Withings watch, the Apple Watch, and the Oura Ring - to integrate [...], to integrate the latest guidelines and publications. The average physician, at least in the United States, only reads journals three to four hours a month, and there's no way, again, to integrate into practice all that information. So we need machine learning, AI, big data, just to synthesize some of that, to bring the right diagnostics and therapy to that patient at that point of care. And that's sort of the promise of [...] AI, machine learning, and big data. For now, in an era where a lot of it is becoming available, how to make it actually useful and provide better outcomes at lower costs is still a huge challenge.

Michael Krigsman: And David Bray, what about the policy implications, and the intersection of healthcare, AI, and what do we do about all of this?

David Bray: So, wearing my Chief Information Officer hat, what I love about what both Shari and Daniel were saying is really about how we can use AI almost as augmented intelligence for both the physician and the patient, because as they said, we're in an era of exponential data, in terms of new therapies, but also everything that's possible that can determine your health outcomes. And so, policy-wise, I think we need to think about, for the patient, how we can provide choices, so that they actually have informed choices about both what they want to know and what they want to have done with their data. Some of us may want to have our data shared more, because maybe it means better health outcomes; but others of us may not want to know everything, because we're not ready to carry the knowledge that maybe nine, maybe fifteen years from now, based on our genomics, we may have this complication in our health. And that means making an informed choice.

Obviously, I'm not a physician, I am a CIO. I do think we need to think about how do we address the data, how do we address the sensors that are collecting it, and then finally, how do we make sure that you have a locus of control as to what's done with your data, and what algorithms are running with your permission and which ones aren't. And I think that's very key.

Michael Krigsman: Shari and Daniel, you are both physicians. And, what about the impact on medicine, and the role of the physician, and how can physicians make use of this data, and where are we in the process of having this data, and then being able to use it in machine learning, and AI, and can you give us practical examples? I'm throwing a lot at you all at once.

Daniel Kraft: Well, I'll start. I think you start with the question about algorithms, which may be a bit different than AI. A lot of healthcare, in terms of what we do as physicians, is: look at a patient; they're complaining of pain when they pee, so we check their urine. Is it positive for nitrites and bacteria? Well, then maybe it's a urinary tract infection. That doesn't take going to medical school to learn. A lot of basic, common healthcare issues can be triaged, or even partially diagnosed, with pretty straightforward decision-tree charts; and there are now - not an explosion, but some examples - of chatbots, or simple, early AI, that can ask you about your symptoms, and can tell you whether that belly pain is likely to be appendicitis, or just indigestion.

So I think there's the simple side of algorithms: in many cases where there is not a lot of medical care - many parts of the world don't have access to physicians, or it's expensive to reach them - we can do a lot, through SMS or your smartphone, to provide algorithm-based triage, especially, and health education. And then we get farther up the spectrum. When I'm seeing a patient with a urinary tract infection, instead of just giving them the standard antibiotic, maybe we'll have other information about their renal function, their BMI, what dose you might want to give based on their pharmacogenetics, from a 23andMe-type profile, or their full genome, which is coming down to a thousand, or maybe a hundred, dollars this year.

So you can go from simple algorithms - pick up the urinary tract infection with an algorithm, maybe even call in the prescription by a bot and deliver it by a drone - but then get much more personalized: pick the most appropriate antibiotic that's safest, that will give you the best outcome based on that individual and the best information from the CDC, all available at the right time and at the right place, and do that in a functional and low-cost manner. That's just one small example.
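(To make the decision-tree idea concrete for readers, here is a minimal, purely illustrative Python sketch of the kind of triage rule Daniel describes. The symptom checks, thresholds, and messages are hypothetical examples, not clinical guidance, and not taken from any product mentioned on the show.)

# Illustrative only: a toy decision-tree triage for a suspected urinary tract infection.
# The questions, thresholds, and advice are hypothetical; this is not clinical guidance.

def triage_uti(painful_urination: bool, nitrites_positive: bool,
               bacteria_present: bool, fever_c: float) -> str:
    """Walk a simple decision tree and return a triage suggestion."""
    if not painful_urination:
        return "Symptoms not suggestive of a UTI; monitor and re-check if they change."
    if fever_c >= 38.5:
        # High fever alongside urinary symptoms could point to something more serious.
        return "Possible complicated infection; see a clinician promptly."
    if nitrites_positive and bacteria_present:
        return "Findings consistent with a simple UTI; a clinician visit is advised."
    return "Unclear picture; a urine culture and clinician review are advised."

# Example use:
print(triage_uti(painful_urination=True, nitrites_positive=True,
                 bacteria_present=True, fever_c=37.2))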

Shari Langemak: I might add something on the role of the physician. I think it will change significantly in the next years already. Every time I'm speaking at a medical conference, the physicians are actually afraid that they will be replaced. But, I highly doubt that this will be the case, at least in the mid-term. It's rather a tool for the physician to help him make better decisions than something that will replace him. We know from very recent studies that while AI is, in certain cases, better at diagnosing - for example, a rare disease - than the physician, the best outcomes are when physician and AI work together.

So, you can think about it like this: you type in the symptoms, you check with the genome and some sort of basic lab parameters, and then you get a recommendation from AI, and then the physician still needs to check if that's really the case, and whether to trust this recommendation. I think this quality check is very, very crucial. We won't get rid of it in the next years, for sure.

David Bray: And I actually have a question for both Daniel and Shari, in that for Daniel, my question is do you find that you feel like you have a locus of control over the data that's being collected from the different sensors you're wearing. And then for Shari, I guess the question is: As a physician, what would be the best way for algorithms and/or an AI to present new information or novel information to you in a path that you could actually absorb, and actually integrate into your practice and care?

Daniel Kraft: Well, I'll start. I mean, great question. I think today, clinicians are still overwhelmed by having to spend half their time typing electronic medical record notes, [at] double the time they have face-time with their patients. The flow of data from wearables or omics, etc., is not really integrated into the workflow of most clinicians, at least in the United States. And we're just starting to move from very intermittent data and being very reactive - you know, waiting for disease to happen, having broken feedback loops from blood pressure to blood sugar - to being much more continuous with our data, and much more proactive as individuals, as physicians, etc. So, right now, there are all of these consumer devices; I was just at the Consumer Electronics Show two weeks ago; there's even more, from tracking mothers and pregnant women, and the baby in utero, all the way to tracking your sunlight exposure and your sleep.

What's just starting to happen in the last years is that the data can flow - in my case, through Apple HealthKit on my iPhone - into my electronic medical record at Stanford, where my physician can start to see that and vet it into the EMR. But right now, he may have 2,000 [...] who want to log in to look at everyone's stats, and blood pressure, and other data. We need to have the AI/machine learning sift through that information and present to the doctor the five patients in his practice who he may need to call and bring in today, based on their blood pressure, their sleep data, maybe their respiratory rate picked up by their mattress.

So today, we have a lot of sensors; this Internet of Things is blending into the internet of medical and health things, but still the docs aren't really connected. There's not a lot of interoperability, and the clinician doesn't want to see more raw data. It needs to be synthesized, so it's actually useful in a timely way, and so that the clinician is rewarded for doing this. Can they bill for doing an e-visit, for looking at the data? As cherished [as it is], I think the role of the physician is key; we're not going to get replaced by AI, but it's going to augment our skills, and enable us hopefully to be much more proactive with our patients, from keeping them healthy, to catching a disease earlier, and then managing chronic diseases in smarter, evidence-based, feedback-loop ways.

David Bray: Excellent!

Shari Langemak: Yes, and so your question was about how it should be presented to physicians in a clinical context, right? I think we already see some small examples of that. I mean, very basic AI in medicine is basically software checking if there are interactions between medications. We have had that for a long time already, with some sort of AI and machine learning as well. The other application: physicians are already working with AI in medical imaging. So, they get a suggestion of what diagnosis is behind a CT image, or the like. But we will see more and more AI that suggests diseases, and I think it's less about how this data and the possible diagnosis are presented, and more about educating physicians about what the limitations of AI may be, and that they should - maybe not always, but in many cases - question the background of this, because AI is based on studies and data, and it's only as good as the data fed into it. So, if we don't have good study data, or if the angle of research changes, the recommendations can also be flawed.
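(As an aside for readers, the medication-interaction check Shari mentions is, at its simplest, a lookup of every drug pair against a known-interaction table. The sketch below is a minimal illustration in Python; the drug names and interaction notes are example data, not a clinical reference and not the workings of any specific product.)

# Illustrative only: check a medication list against a tiny example interaction table.
from itertools import combinations

EXAMPLE_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin levels",
}

def check_interactions(medications):
    """Return (drug_a, drug_b, warning) for every pair found in the table."""
    warnings = []
    for a, b in combinations(sorted(set(medications)), 2):
        note = EXAMPLE_INTERACTIONS.get(frozenset({a, b}))
        if note:
            warnings.append((a, b, note))
    return warnings

# Example use:
print(check_interactions(["warfarin", "aspirin", "metformin"]))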

David Bray: That makes my computer science heart sing, because of the mantra of "garbage in, garbage out"; we need to keep that in mind when we do algorithms and AI.

Daniel Kraft: On that point, I think what's exciting about this age is that hopefully, in the future of healthcare, the way we practice medicine isn't going to be just evidence-based - meaning we look for a double-blind, placebo-controlled trial of a patient just like the one I have, which usually isn't the average patient - but much more practice-based medicine: mining data from all the Epics and Cerners, and NHS data, [to say], for David, who's got this particular condition with this genotype, this looks like it will be the most efficacious therapy; and we can continue to sort of crowdsource that information. There's still a lot of data-blocking - pharma companies, EMRs, hospitals, academics don't talk to each other - but we can start to collect information. I mentioned crowdsourcing earlier: when we drive with Google Maps and Waze, we're used to sharing some private information, our speed and our location. In exchange, we get a map of the traffic, and we can adapt around it.

We can use that same mindset across healthcare, so that the information isn't just garbage, but is synthesized from thousands or millions of people, and we have our own sort of healthcare map: a GPS to guide me in my healthcare journey, or a patient I may have. And those maps keep getting refined, and when there's a traffic jam, let's say, we can learn to route around it. That's an opportunity to improve the data sources, because the way we collect data and do clinical trials is set to shift dramatically if we leverage some of these tools with the right regulatory, reimbursement, and other mindsets.

Michael Krigsman: But how do you make this happen? Because even if it can be proven scientifically, medically, to work, and to lead to better outcomes, there are so many entrenched interests - political interests, physician interests, economic interests, insurance interests. How do you ... You're talking about an overhaul not only of the way we practice medicine, but of the way we think about it. It seems like an almost impossible task.

Daniel Kraft: Well, there are a lot of misaligned incentives in healthcare across many healthcare systems; in the US, there are hundreds of systems. Kaiser or Geisinger can operate differently than a fee-for-service place. So part of it is aligning incentives. As we're moving from fee-for-service healthcare, at least in the United States, to more value-based care, we're going to be rewarding technologies and systems that give us better outcomes that we can measure, whether it's keeping someone with heart failure out of the hospital, or doing a smarter, earlier job of diagnosing a cancer, and then using AI, like IBM Watson's already done, to help figure out the best sort of therapies for particular lung cancer patients. No oncologist could synthesize all the new molecular markers and different combinations of drug therapy.

So, part of this is aligning the interests. It's not going to happen everywhere; in some systems - Germany - things can happen that can't happen here. The NHS has great leverage. The VA can do things in a smarter way. So, we need to align those interests. It's not going to happen all at once.

Michael Krigsman: So, there's an alignment of ... So there's a, say, coincidence of technology on the one hand, with all of the social, the economic, political pressures, constraints, objectives, on the other hand. And so, where's that intersection between the technology of AI and the data, and these other factors?

David Bray: So, Shari, I'd be interested in Germany's perspective, because if I'm correct, and correct me if I'm wrong, Shari, you actually have tougher privacy laws than we do in the United States, is that correct?

Shari Langemak: Yeah. Actually, we struggle quite a bit here in Germany to implement new healthcare solutions. Startups really struggle a lot to bring new solutions to market, because there are a lot of data protection rules, and we have many, many laws that hinder innovation here in Europe, and especially in Germany. I have kind of mixed feelings about that, because of course, I think especially when it comes to AI, especially when it comes to big data, data security and strong privacy laws are very, very important; but you must ensure at the same time that they don't prohibit innovation, because it's so important to reduce our costs in healthcare. As we all know, we are barely able to cover all the rising costs of healthcare right now, especially drug prices, and the rising cost of caring for people with chronic diseases. So we must find a way to allow innovation, and at the same time, still protect the individual.

And I think, especially for AI, one way to do that is to have transparency, basically. I think companies must show what algorithm they use and what data the recommendations are based on, so that we can still check afterwards whether the recommendations are valid, whether we might need to change the algorithm, and things like that, basically.

David Bray: Alright. And one of the things that I would say from my own experiences as a CIO is you don't want to be top-down when you're dealing with many different players. In fact, it's exactly what Daniel said: you want to think about what incentives will help encourage people to find their own paths in the direction we want to go. And so, if the direction we want to go is holistically treating the patient, making sure it's outcome-based, and actually trying to make sure that we're thinking about how we make sense of this data overflow, then the question for us is, "What are the incentives, both in the private sector and the public sector, that will encourage innovators to move in that direction?"

Daniel Kraft: I mean, I think this whole value-based approach is going to drive the incentives. If I'm a physician, and I'm not paid to see more patients and do more procedures, but to keep you healthier and get better outcomes, I am much more likely to use the AI agent to help me pick the right drug and dose, because I'll hopefully get rewarded in some form, whether it's payment depending on patient outcomes, a bonus at the end of the year for having patients with good blood pressure control, or patients being picked out before they end up needing hospitalization. In a few years, it may even be malpractice not to use AI in doing diagnosis and therapy. We all know the issues of medical errors; they're the equivalent of a 747 crashing every week or two.

A lot's happening in a hospital setting. We still treat patients based on our old experience around what journal article we just read, and I think as we're incentivized to get better outcomes and rewarded in smart ways, both financially and otherwise, it's going to drive the adoption of these.

And it's going to be disruptive to certain fields. Dermatology, radiology, and pathology are all based on pattern recognition. A lot of what a physician does is learn, "This is what a sick patient looks like. This is a constellation of symptoms." But we may not catch that zebra, or we might miss something, and the more we can leverage this and, again, combine it ... That won't replace the clinician, but using the combination can hopefully give us better outcomes, and enable a primary care doctor in rural Rwanda, using an AI app to do skin exams, to pick up early Ebola, or other things that might have global health implications as we're all getting more super-connected and the world becomes more globalized, including issues that David knows well: bioterrorism.

Michael Krigsman: Are there policy ... or let me put it this way: What are the policy implications? You know, there are legal implications, for example - the legal changes that will need to take place to support this - and other types of government policy as well.

Shari Langemak: They have started, so I think we have to answer a couple of questions: How do we ensure this quality control we have talked about? To what extent do we want to use AI, and am I allowed not to want to use AI? As a patient, can I say I don't want to have AI be used in my treatment? So that's a very tough question, right? Because maybe the outcome isn't as good, and this patient might cost a lot of money to our healthcare system. Another important thing is who's responsible if something goes wrong, right? If AI makes a recommendation and it's the wrong one, is the physician in the end responsible, or the company?

So, I think these tough questions - because most of them also have a very critical ethical dimension - can only be answered in a public discussion somehow, or at least in a discussion where all the major stakeholders are involved and bring their views into the discussion.

Daniel Kraft: Essentially, I don't know that we can run medicine policy by voting or debates with folks who may not have a good picture of what practicing medicine looks like, or where AI may be in a couple of years. I mean, AI is moving pretty quickly, and in this exponential age we often don't appreciate what's going to be here in a couple of years, and how powerful it might be. I agree: who's liable for this information? Just like with self-driving cars, eventually someone's going to get hit by one, and who gets sued? Is it the self-driving car software? Is it the person who owns the car? There's so much data flying out. So, I'm wearing a little patch right now, trying to do a live demo, that's streaming my vital signs to my smartphone, and I can literally be sending David and Shari my 24/7 EKG, which you might see here. ... Hope it looks okay in my [...] out there.

David Bray: You look very relaxed.

Daniel Kraft: Yeah.

Shari Langemak: Yeah.

Daniel Kraft: Who's liable for looking at that data in real-time? What algorithm parses that? [...] your rhythm is going on here - not just your EKG, but your sleep data, and beyond. I think we need to be careful about over-legislating this, and allow it some room to expand, but balance the malpractice laws at the same time.

Michael Krigsman: By the way ...

David Bray: I was just going to say, as CIO, I would love to do experimental pilots, and so my question for both Shari and Daniel: If you could design an experimental pilot that could be done this year to show people what's going to be possible in this era of patient-centric healthcare and AI, what would be the experiment you would design?

Daniel Kraft: Well, what I think is coming faster than we think, in this sort of hyperconnected age, is - you know, all these wearables are becoming commoditized; it's how we make sense of and synthesize the data. So, I like to use the example of our modern cars, which have three or four hundred sensors. We don't care about any single sensor, but the software gives you a "check engine" light when something's off with the exhaust. Hopefully, that means you're proactive and you take your car to the mechanic before you blow a gasket.

Could we start to see some pilot systems which look at your connected home data - through Alexa, through your smart mattress, through your wearables - understanding your own mix, to kind of give you 24/7 surveillance of your particular exhaust and your baseline information, and start nudging you in the right direction to get you on the path of health and wellness; or to manage, let's say, expensive patients like Type 2 diabetics, who end up costing the healthcare system, with a lot more morbidity, and challenge, and suffering as well. That might be a little pilot.

How could we take a systems medicine and a systems biology approach, and connect these dots? We're starting to see some companies do that, like Arivale, founded by Lee Hood, or Longevity, Inc., or Preventure.

Michael Krigsman: I want to remind everybody that you are watching CXOTalk, and we're talking about AI and data in healthcare. And right now, there is a tweet chat going on using the hashtag #cxotalk, and you can ask your questions directly of our truly amazing panelists today.

Daniel, I have to ask you, what product is it that you're using that shows your EKG in real-time?

Daniel Kraft: Oh, this is ... I'm wearing a patch from a company called VitalConnect; this little sort of band-aid-sized element that's disposable. These are moving into the hospitals now to monitor patients who would otherwise not be on monitored beds. I can wear this for about a week. And again, it streams to my smartphone: EKG, temperature, stress level, other elements. And, you know, as an example of the sheer level of streaming data that could come off my body 24/7, it's a bit of a "So what?" unless that data can flow, let's say, not just into my electronic medical record, but into a smart medical record system, where it can be parsed with machine learning and AI so we can figure out what changes there might be - what might be evident to what I like to call the "predictalytics," predicting that I might be heading in the wrong direction, so it can nudge me, or move me back in a good direction.

So, there are several of these types of smart band-aids coming out, so we can start to measure all sorts of things. The challenge is what we do with the data, because a lot of it is how we blend it and make it part of the workflow for an overwhelmed clinician, who doesn't want yet another data flow they can't manage and are liable for.
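(For readers curious how the "learn my baseline and flag deviations" idea might look in code, here is a minimal illustrative Python sketch using a rolling mean and standard deviation. The window size, threshold, and heart-rate values are hypothetical; this is not how VitalConnect or any other product mentioned here actually works.)

# Illustrative only: flag readings that deviate from a personal rolling baseline.
from collections import deque
from statistics import mean, stdev

def baseline_monitor(readings, window=60, threshold=3.0):
    """Yield (value, is_anomaly) pairs by comparing each reading to a rolling baseline."""
    history = deque(maxlen=window)
    for value in readings:
        if len(history) == window:  # only judge once a full baseline window exists
            mu, sigma = mean(history), stdev(history)
            is_anomaly = sigma > 0 and abs(value - mu) > threshold * sigma
        else:
            is_anomaly = False
        history.append(value)
        yield value, is_anomaly

# Example: a steady resting heart rate with one sudden spike.
sample = [62, 63, 61, 64, 62, 63, 62, 61, 63, 62, 64, 63, 110, 62, 63]
for hr, flagged in baseline_monitor(sample, window=10, threshold=3.0):
    print(hr, "ANOMALY" if flagged else "")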

Michael Krigsman: So simply having a set of products that generate all kinds of data is not useful unless we have that entire chain built: the right data sources; the right machine learning/AI software to parse through that data in a meaningful way; clinicians who know how to use that data; and a regulatory and practice environment that accepts the use of this data, and that has parsed through the risks involved, so that it's legally safe for physicians to make use of this data.

David Bray: Well, and actually, that's what I was going to ask Shari, because I know Germany, again, has tougher laws. As a CIO, what would be the experiment that you would want to design for medical practice and data outcomes, and how would it be different, say, in Germany versus a different [country], because of the laws?

Shari Langemak: I think the biggest challenge here in Germany is how to prove that innovation actually has a big impact on outcomes, and to fight reservations from our physicians here - not only physicians, to be honest, but from many, many Germans. The argument I hear the most against innovation, against AI, is mostly that we don't have enough data. So, I would like to see an experiment where physicians and AI try to diagnose something here in Germany, compare it with the physician alone, and show what we can already do - how we can already improve the outcome for certain types of patients - to show that there's actually a huge impact from innovation in healthcare, and a huge possibility to reduce costs for our healthcare system.

Daniel Kraft: Well, that's starting to happen. I've been involved with the X-Prize, and in designing a new one: the Medical Tricorder X-Prize, which is to build a sort of home diagnostic device for the consumer, blended with AI. An example of one medical tricorder that was part of the X-Prize competition is from Scanadu, which entered a clinical trial; I think those are closing now. But, you know, now you can collect this data at home, which used to require going into a clinic, or an ER, or an intensive care unit. Now that data can go through your phone, and AI can start to look at, "What's Daniel's normal baseline? How's that changing? If he's getting a really serious infection, how can this help pick that up?", and communicate that to a medical team in a smart, proactive way. And so that's ...

And so, I think part of this future is how and where we collect this data, and how the consumer or patient is empowered to own their own healthcare data and share it when they feel like being a data donor. A lot of new things are going to come through these smart sensors, and clinical lab tests as well, not just vital signs.

Michael Krigsman: What do we need to do - or what do the stakeholders need to do, the public sector and the private sector - in order to encourage all of this innovation, and create the right type of environment in which it can flourish?

Daniel Kraft: [Laughter] Well, I'll take one example. I think it often comes down to financial incentives, right? If you, as a patient, can pay less for your insurance premium if you agree to go through an AI chatbot before you call the triage line or show up in the ER, that might encourage some adoption. We just saw launched yesterday, in San Francisco, a new company called "Forward," funded by Kleiner Perkins, Google Ventures, and then [...], who are just trying to build the clinical practice of the future. I think for a monthly fee, you can have unlimited access. And when you go there, they have big touchscreens where you can display the data; they apparently are using AI, and it seems like it listens to patients in the clinic and helps provide suggestions to the clinical team. So, I think we're seeing early evidence of this. And you can get to that smart, concierge-type practice at a very low price point - under 80 bucks a month, unlimited access. That's going to be disruptive to regular payers, regular hospital systems, and physicians, and it will drive a lot of this adoption, especially when you see you're getting better, smarter care, based on your own data, your own [...], your own behavioral type, and that the user interface matches you.

Another AI element is these smart coaches, because you can diagnose a patient, or prescribe them a therapy or other intervention, but half the people don't follow through on their medical intervention. Now we're seeing AI chatbots and coaches that can track, incentivize, and nudge patients, hopefully in a more personalized way; that's part of this blend of AI and machine learning and user interface that will help drive smarter and better outcomes as well.

Shari Langemak: I can only say that here in Germany, we have just recently started to [explore] this potential in e-health and digital health. We just introduced an e-health law last year, and it shows what we basically need. We need incentives - financial incentives and financial penalties - to move a very old sector into the future and help it adapt to these changes, because we've tried for a very, very long time. Anybody listening from Germany probably knows how long it took us to start with electronic health records. It's really a shame. [laughter] So, we really need these financial incentives, at least here, and we really need politicians who are better informed about technology. And I see that some discussions are starting; more and more politicians are trying to talk to young entrepreneurs, starting to talk to companies, going to the Valley to see what's happening there. So we're trying to catch up now, and hopefully we'll be there soon as well. [laughter]

Daniel Kraft: Maybe I'll pitch this quick question to David. You know, we have 4G or something, and now 5G and 6G are coming, which means our smartphones and our wearables and our digital exhaust can be streamed at 100x the data rate - so hopefully pretty soon, I think. So we need the FCCs of the world to help enable this data to flow, whether it be a need for a data pass for healthcare data; all these privacy issues are critical; how do we layer things like blockchain on top of this to make data more shareable and safe? Where's that heading?

David Bray: Right. So, recognizing I'm not a commissioner of the FCC, nor am I Congressionally appointed, I can say you're right; 5G and more is around the corner in the industry, and it's going to start rolling out in stages. 5G is interesting, because you can actually do structured data elements within the signal and the message itself, and so you can actually say this part of the signal can be shared for these purposes, or I'm a type of first responder, I'm a type of doctor, and things like that; so we can even have ad-hoc mesh networks. And so, it will be interesting both from a community perspective as well as a hospital perspective: how does that include the broader ecosystem of care that includes physicians, but also first responders who are first at the scene for your health? Maybe there's a burning building, and you're unconscious, but your phone is still active: can they find you so that they can bring you to the hospital, and things like that.

Also, we need to think about how we can use this to have smarter transportation of people's data, because as you know, the data's growing so large. If we were to port that file everywhere you went, that would be voluminous. And so, we need to start, like you said, thinking about how you do informed sharing. I, particularly, am interested in - and I try to tell people this - talking about public service as opposed to just government. I think a lot of the innovation is really going to come from individuals in the public who care about this issue, whether they themselves have an affected family member that they want to get better healthcare for, or they're just passionate about making some innovation in this area, as well as from public-private partnerships that are thinking beyond their own bottom line.

I do wonder if the world is changing so quickly that the traditional, top-down approaches to addressing these issues will not succeed, and so the question is: what is an informed approach that does protect the consumer, does protect the industry, but at the same time keeps up with the speed of change that is expected to happen? And I don't have any easy answer, other than to say what we did here at the FCC, when we had the FCC speed test app, is we made it open-source; and this was done in late 2013, and you can imagine, given the events of late 2013, to say, "Hi, I'm with the government. Would you like to download an app that will monitor your broadband connection?", that probably would not have been well-received, except we made it open-source. You can see, by design, we weren't collecting your IP address, and by design, we didn't know who you were within a five-mile radius. And so, maybe there are things that require public trust. We can expose what the algorithm is doing, or expose what is being done with the data, so you can see that we're doing privacy by design wherever possible, and then giving you informed choices: maybe you do want to share more data because you think it will help inform the cancer clinical trials that will make your loved one healthier, or you may choose not to do that because you value your privacy more than whatever other outcome.

So, I think we need to rethink how we've done public service at the same time we're thinking about how we address healthcare and other things like this. It's going to be a very interesting challenge, and that's why I'm really glad that people [...] like Daniel and Shari are leading the way from the physician perspective, because we've really got to let the experts lead the way on how we address these issues.

Michael Krigsman: We have just about five minutes left, and we began this discussion talking about the disconnected pieces. I think Daniel and Shari, you said the dots need to be connected. And hearing you talk, it seems like this is the fundamental problem, because you've got the technology providers, you've got the physicians; there are all of these people working on it. And Daniel, you mentioned earlier that it will ultimately be financial incentives that enable the dots to be connected, that align the regulatory environment, and so forth. And so, in the last five minutes, I'd like to ask the three of you for your advice, both for the public sector and for the private sector: how do we create the environment where the dots can be connected, and have a context that enables this to go forward and be used in practice? What advice do you have; specific advice?

Daniel Kraft: For example, I was in Washington this summer with Vice President Biden as part of the Cancer Moonshot summit, and a lot of the focus there is to do ten years of progress against cancer in five years, particularly in therapeutics. A lot of that was aligning the ability to share data, and catalyzing that between pharma, academics, and hospital systems; speeding up IP, intellectual property; speeding up the FDA processes for new cancer drugs. Some of these lessons do get driven by policy and convening, getting everyone to agree to collaborate and connect the dots. The big HIMSS conference is coming up next month, where there are still all these issues about interoperability; a lot of systems just don't talk to each other, and they're not incentivized to, so that can be driven by policy.

And again, on the smaller scale, every individual can start playing with little AI chatbots and bring them to their clinician, and clinicians are out there starting to ask, "What tools exist today, even if they're not paid for, or [...], that I can use to start enhancing my practice, or my touchpoints with my patients?"; so, not waiting for the future to arrive. Again, the future's already here, it's just not evenly distributed, as the famous quote goes, and it's up to us not just to predict the future, but to create it using some of these new tools, and to catalyze that differently: in Berlin, which has an amazing startup culture, in Silicon Valley, and in other parts of the world, where not everyone is wearing an Apple Watch or Google Glass.

Michael Krigsman: So we have a question that's related to all of this from Twitter, and Joanne Young, who is a very experienced Chief Information Officer herself in higher-ed, is asking, "Does the promise of AI include halting, or reversing cost escalation?" So that, obviously, is a key part of it. Anybody want to jump in on that before we finish off with the remainder of the advice?

Shari Langemak: I think I'd like to take that question. So, of course, there is a fear that we'll have increasing drug prices, and even more personalized care through AI, because patients get very, very specific treatment, which is very costly. But AI can help us to reduce costs in many ways, not only by reducing the rate of complications, but also by helping pharmaceutical companies to reduce the time it takes them to bring a drug to market. First of all, we have already started working on recommendations that give pharmaceutical companies some sort of direction for a new drug, so they don't have to spend that much time on many different types of drugs to see whether one works or not. It also helps, in the end, to get FDA approval, for example, because we can use the data, use AI, to see whether a drug is safe or not. So, in the end, I strongly believe AI will help us to decrease costs in healthcare.

Daniel Kraft: And, as the incentives shift; like right now in the United States, for Medicare, hospitals don't get reimbursed if a heart failure patient comes back within the first month, and there are several companies that are setting up mesh networks in the patient's home to look at their scale, their blood pressure, their activity, and have that sort of early "check engine" light; red, yellow, green; to help identify the folks who are moving from green to yellow before they get to red. And so, we can be much more proactive using this data, using the algorithms, to pick up that digital fingerprint of someone falling off the deep end on heart failure, or mental health issues, or emphysema, and that can lower costs. We're picking up someone who's pre-diabetic before they become diabetic, and putting them into these programs, like the one from Omada Health, which is a digi-ceutical social network platform that can coach people through behavior change, [...]. So, you can definitely lower cost by using the data in smart ways and leveraging early signals that can change the course of a disease path.

Michael Krigsman: I love this idea of the digital fingerprint, and AI being used to interpret that data.

We're just about out of time. Shari, do you want to share your advice for making this all come together, and then David, we'll turn to you.

Shari Langemak: Absolutely. I can only agree with Daniel that the future won't wait for us, and especially when we're talking about Germany or other European countries, I hope that we start the discussion, that we start to get informed about these new technologies, and start implementing new laws that allow innovation and prevent the risks that come with it. It's a topic that is not easily touched here in Germany; we sometimes believe that by avoiding the topic, we find a solution, [but] it's not like we can prevent innovation. So, I really hope that the discussion starts now.

David Bray: So, I will conclude with three thoughts, and enthusiastically agree with both Shari and Daniel. First, as a CIO, when I arrived, we were spending more than 85% of our budget just to maintain the legacy systems we had, and while we didn't see a budget increase, we were motivated to move to public cloud and new technologies because we could see efficiencies at scale. So, I'm hopeful with AI that, even if there's not necessarily an overt financial incentive, the legacy way of doing things may later be shown to be so expensive that you have to move to it.

Two: when we made that move, people thought we were crazy to do it in 2013-2014; we need safe spaces to experiment. I mean, yes, there are certain parts of medicine that you have to keep going well, and keep the trains running on time, but creating those safe spaces will be the key to showing what's possible and bringing everybody else along.

And third, it's going to take all of us. It's going to take physicians, it's going to take IT professionals, it's going to take the public. This is a massive endeavor, and so I look forward to seeing what sort of ecosystems of thought and action evolve as a result of this.

Michael Krigsman: Alright. Wow, this has been quite a discussion. You have been watching Episode #213 of CXOTalk, and we've been discussing healthcare, digital data, AI, the regulatory environment, and a lot of other topics. Share this with your friends, because the transcript will be up on the CXOTalk site early next week, and it's a rich treasure trove of material. I'd like to thank Daniel Kraft, David Bray, and Shari Langemak for spending time with us. We'll be back next Tuesday for our next show, and then we'll have another show the following Friday. Thanks so much, everybody. Thanks for watching. Bye-bye!

Virtual Reality: Innovation in Higher Education

Michael Mathews, CIO, Oral Roberts University
Michael Mathews
Chief Information Officer
Oral Roberts University
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Innovation and digital transformation in higher education means using technology to address the core strategic goals of the institution. On this episode, we talk with the CIO of Oral Roberts University about their multi-million-dollar program to use virtual, augmented, and mixed reality to create a global educational community and foster the educational agenda.

Michael L. Mathews has over 24 years of experience as a senior-level IT executive, bringing creative solutions that value the end-users of technology and business process management. These solutions have benefited end-users in higher education, manufacturing, and high-technology companies.

Mike has held positions as a chief information officer, general manager of CIOs, chief strategist for innovation, business development officer, trainer, teacher, and vice president of academic services for leading corporations and higher education. Mike has been a CIO within higher education for over 12 years.

Mike has a deep and rich work history, including 12 years at Cray Research as an instructor and global training manager, as well as 10 years at SunGard Higher Education, where he served as chief information officer and vice president of academic services. In these roles he has influenced hundreds of research, energy, chemical, and manufacturing companies, as well as over twenty community colleges, universities, and statewide systems. Mike’s dual experience in business and education, along with working knowledge of seven Enterprise Resource Planning (ERP) systems, has allowed him to quickly assess business process and technology innovation that creatively impacts products, processes, students, and society.

Download Podcast

Virtual Reality: Innovation in Higher Education

Michael Krigsman: Welcome to Episode #210 of CXOTalk. I’m Michael Krigsman, an industry analyst and the host of CXOTalk. CXOTalk brings together the most innovative people in the world, who are doing just the most fascinating things; and we talk about leadership, we talk about the impact of technology. Before we begin, I want to say a thank you to Livestream, which is a company that provides our video infrastructure and they allow simultaneous streaming to Facebook, and email capture, and Livestream makes CXOTalk possible. So, thank you to Livestream, we love you guys. You guys provide a great product and a great service.

Today’s show is really interesting. We’re going to be talking about the impact of technologies such as virtual reality, augmented reality; and we’re speaking with Mike Mathews, who is the Chief Information Officer at Oral Roberts University. And we actually are going to do some technology demos today. Mike Mathews, thanks for being here!

Michael Mathews: Hey Michael, thank you for having me. What a delight to join you from Tulsa, Oklahoma on the most wonderful campus of Oral Roberts University at our new Global Learning Center facility.

Michael Krigsman: Well, I’m so excited. Mike, tell us briefly about Oral Roberts University, and just set the stage and give us context for what we’re going to be seeing today.

Michael Mathews: Absolutely. So, Oral Roberts University is celebrating its fiftieth year. And, we are more known internationally, so we draw students from ninety-five different countries around the world, we have a hundred different programs, we have six colleges within the university, and we believe in something called “whole-person education,” where we develop the body, mind, and spirit. And so, we are in Tulsa, Oklahoma on a wonderful campus, but we want to reach the world. What you’re going to hear today: we really believe that education can be transformed; transformed from not just saying “We did it,” or “We digitized it,” but so that we can reach millions of people around the world, because there are seven billion people around the world, not just 330 million in the United States.

Michael Krigsman: Now, I know that your goals are educational, and pedagogical, and community; and you use technology to support those mission goals. How does technology fit in? How does technology help you support your learning and educational goals, for example?

Michael Mathews: You know, one of the things I’ve done is to get rid of technology, and that may sound shocking to many people, but oftentimes on a college campus, at a university, there’s too much technology. And so I call it the merging of both the science of technology and the art of technology coming together. The science of technology, smartphones and anything you can imagine, is almost a commodity today and anyone can buy it, anyone can service it. But not anyone can actually imagine what it can do when it’s used effectively.

Michael Krigsman: Your use of technology; virtual reality; how does that fit into this picture?

Michael Mathews: You know, virtual reality and augmented reality are phenomenal from this standpoint. It took the telephone 75 years to reach 50 million people. It took the television 38 years, it took the internet 7 years, Facebook 3.5 years; but suddenly this year, when the internet in July hit its ten-thousandth day since its invention, augmented reality - Pokemon Go - reached 50 million people in 38 days. And so, we’re trying to reach millions of people, and we understand that digitization, the smartphone device engaging the learner, is critical. So we decided to invest in the whole enterprise edition of augmented and virtual reality. We’re not about playing around with classes, and technologies, and this brand or that brand; we’re about, “How do we take something systemic and make it available to millions of people around the world from a smartphone?” And you can see the virtual glasses that we have here, and you can see on the screen our full-immersion room that we’ll go into momentarily.

So we bought the whole enterprise edition. And Michael, what makes it exciting for us is this: gamification has not benefitted education very much at all. In fact, it hasn’t made a dent. However, gaming casinos have made a fortune off of it. And the reason why is simply this: you cannot step in with the technology and ask faculty to change everything they do, the way they do it, how they teach, what they’re good at. However, augmented and virtual reality, from a pedagogical perspective, allow faculty to do what they do well: teach! Be experts in their field. And now, virtual reality goes beyond flipping the classroom - we call it “flipping the university” - around the world. It supplements what a faculty member wants to do rather than replacing it.

Michael Krigsman: So, in order to present this virtual reality experience that we’re going to see in a moment, tell us what was needed behind the scenes to get it all set up?

Michael Mathews: You know, behind the scenes, it doesn’t take a genius. Whatever technology you own had better work well. I challenged our staff: “Hey, we’re smart people, I’m not taking that away from anyone, but let’s be good at what matters.” And so, we followed some great partners out there who are almost in the same business we are, of reaching the world. With wireless, we decided, “Hey, let’s not be the experts in wireless ourselves. Let’s let a true wireless expert manage the campus,” because we’re dead in the water if the campus doesn’t function from a wireless perspective. In the first two years of a three-year initiative, it has worked flawlessly. And now we can take on these big initiatives and broadcast widely from our facility, reaching people with virtual and augmented reality. So, the infrastructure had to be flawless, and we found the right partners to help make it flawless.

Michael Krigsman: So, let’s take a look at your virtual reality room, and put that on screen and so … What are we looking at right now?

Michael Mathews: You know, what you’re looking at - and I’ll turn it over to our employee Steven Guzman in a moment - is two different things in one big room. You’re looking at a catcher screen where we can bring in fifty students and train them in multidimensional mixed reality, but then we also have a full-immersion room where people can actually go in and walk inside combustion engines, go out in the ocean, walk on an offshore oil well, go on an archeological discovery, fly over to France, whatever the case may be. We’re talking about half a million different learning environments that are backdrops to help the faculty member explain that which the textbook has always been trying to explain.

So at that, I’m going to give you the live example by turning it over to Steven Guzman. You want to introduce yourself, Steven? And let’s go live!

Steven Guzman: Hey guys, Steven Guzman here from Oral Roberts University. We are inside of an iCube, and we’re going to give you an example of what’s going on here. So first of all, just follow me if you will; turn your camera around.

Michael Krigsman: Okay. So, yeah, what is this? That headset with the white bubbles?

Steven Guzman: So what Jesupalumi is wearing is a pair of glasses that have nodes on them that interact with the infrared lasers inside the cube. There are lasers that actually beam down from all four corners. As he moves his gaze, the entire jet engine is actually going to move, as you’re going to see in this example.

Michael Krigsman: And, how does this work?

Steven Guzman: So, what happens is the infrared lasers actually shine onto the nodes, and then the nodes react with smart technology; they say, “Okay, movement is going to happen, so just move me.” If you wouldn’t mind, go ahead and step outside the cube, and you’re going to see it happen as he moves his gaze.

Michael Krigsman: So he’s moving his head. And, the lasers in the room are connecting, picking up the movement of his headset.

Steven Guzman: That’s correct. So, just go ahead and show them the lasers in the corners. So that’s one infrared laser. If you pan the camera, you’ll see the other lasers that are on the corners. Jesupalumi, go ahead and take off the infrared glasses and move them around, just so we get a feel for how they can move the model… So that’s him moving the infrared glasses around. Alright, go ahead and put them back on. And just know, we’re doing that with about eight thousand learning objects that we have here on site.

So, we step inside this cube. Jesupalumi, I want you to come around the front of the jet engine slowly, and we’re actually going to take a walk inside this jet engine. The idea is we want to get people off of paper and 2-D, and we want professors to actually step inside this cube with their students and say, “Okay. Student A, point out an abnormality in this jet engine.”

So, let’s go back to Jesupalumi first. Jesupalumi, go ahead and step on into the jet engine… So, come around, keep going… keep going into it… So, the professor is going to say, “Student A, go ahead and pause. What’s the abnormality in this pump manifold?” And Student A is able to identify that. “And Student B, what is an abnormality in this pump tank manifold?” And, you get the idea. Students can get a larger concept of what’s going on with these extreme technologies, and they’re that much more valuable whenever they go and apply for jobs at companies like Boeing or American Airlines.

Michael Krigsman: What are some of the primary application areas? Or maybe can you talk about different application areas and how this impacts the learning?

Steven Guzman: Sure. So, initially, we see engineering with the jet engine, but we’re also able to do things with surgery. So you can pick apart an entire body, and that’s the value for our pre-med students and our nursing students. We also can take you to an offshore oil well, or a derrick, where we’re able to see how the structure is created, how the drill goes down into the Earth, and how they’re able to pull that oil up. Think about architecture: you can go on-site at any building. We actually built the Global Learning Center in 3-D and have taken a virtual walk through that as well.

Michael Mathews: If I could, Michael. [...] What’s important to know is that we’ll put faculty in front of an engine like this, or in front of a neuron, and they’ll actually jump into their expertise within seconds. We’re not asking them to change their curriculum or what they know, and that’s what they love about it: “Put something behind me, and let me be the expert that I believe I am.” And as you see right here, it’s the same exact thing. So, we have a professor who is a cancer research specialist. He stepped in front of a neuron, and sixty seconds later, he’s telling everybody exactly how the sheath of the nerves works, how cancer is caused, and how they believe there’s a cure for it. We have six colleges, and every discipline is able to take parts and pieces of this and leverage it to further advance the cause of whole-person education at Oral Roberts.

Michael Krigsman: What’s the response from students? How do students react to this?

Michael Mathews: You know, the most fun part of the job lately is to actually bring students in and let them speak about it. And, to a T, they’re saying, “Somebody cares about our education now.” The investment isn’t in plasma screens - and again, no offense to any companies selling plasma or LCD screens, or analytics, or something like that, because that’s a waste of a lot of people’s money and time, to be honest; LCD panels only make the presentation look better and sharper in focus. But when you can put learning objects in front of people, objects that the faculty own and that students can actually speak to, it’s a whole different world. So the students are loving it.

Michael Krigsman: And how has it changed the learning outcomes? Have you correlated this yet to grades or test scores, or job placement, things like that?

Michael Mathews: You know Michael, we haven’t, because we only started this about three to six months ago. However, we did the research along with Eon Reality, who, ironically, happened to do a study right out of Tulsa, Oklahoma, and found that the decrease in time to learn subject matter is dramatic; their figure is 1400%. When you can actually show something that someone needs to perform, like operating a power plant, it decreases the learning time by that much. Now, we’ve been conservative in that we’re using the number 400% improvement. And it doesn’t take a genius to realize it’s very doable and very possible. What we’ve watched now is faculty and students just jumping at the opportunity.

Michael Krigsman: And how about the learning curve for faculty to make use of this technology in the most effective way?

Michael Mathews: You know, the learning curve is … Pokemon Go did us a huge favor. Let’s face it. When you can see that many millions of people leveraging Pokemon Go, and we happen to come up with virtual reality glasses that look like this for smartphones, with eight thousand learning objects on them, the learning curve is about three minutes. And that’s incredible.

Now, when we talk about changing, pedagogically, the way you run a class - from the introduction week all the way through to week 13 when you’re testing students - that’s going to take a little bit of time. But the optimism here on campus is tremendous and wonderful. We have ten brand new classrooms focused around reaching the world, including virtual and augmented reality, and the classes are filling up as we speak.

Michael Krigsman: So you are broadcasting these classes, then, around the world.

Michael Mathews: Absolutely. You know, when you think about education, only 6% of the world has a post-secondary degree. And there are four thousand universities and colleges in America, and seven billion people in the world. If you divide up the number of people being trained or educated, each university across America only has about five thousand students. That’s embarrassing. That’s unacceptable. But when you can start broadcasting pieces of intelligence around the world on a card, through a computer, you’re changing the world. You know, it’s time that we stop talking about analytics. Imagine me talking to a parent and saying, “Here’s a handful of analytics. Hopefully your son or daughter has a blast here.” Rather, we’re able to say, “Here’s a Fitbit, enjoy the ride. It has intelligence that integrates with the student information system. Here’s a pair of glasses that allows you to access intelligence.” We’re all about intelligence, because we’re training intelligent people to be intelligent around the world. Analytics is old news, and hopefully every vendor out there starts realizing that and stops boring CIOs to tears with that conversation.

Michael Krigsman: So, you’ve got a set of metrics that you calculate very carefully related to digital transformation. Can you talk about that? What are the types of metrics that you’re looking at?

Michael Mathews: Absolutely. So we came up with a digital transformation index three years ago. And part of the index was, “Are we considered world-class?”, and we knew the answer was “no” at that particular time. And what would give us world-class recognition, really, was applying for awards around the country from a digitization standpoint. In other words, can every student have an online concierge service from their smartphone? And the answer today is “yes.” In fact, you’ll see a publication come out on January 18th, on eCampus, and it’s good to talk about that. Let’s not call something a “helpdesk,” because that sounds like it’s broken. Let’s call it “concierge service”: how do we meet the needs of the digital world? Even my role as the CIO, and now Associate Vice President of Technology and Innovation, isn’t to try to impress people and change education; it’s really to try to help people who are in a digital world not only survive, but thrive. And that, we believe, will help change the paradigm of reaching the world with whole-person education.

Michael Krigsman: So we have a question from Twitter. From Alan Berkson, who asks: “What expectations do you have for how educators will change their curriculum, based on this technology?”

Michael Mathews: You know, great question, Alan. But I’d say, we have no expectations whatsoever. And that was the breaking point. That’s the threshold you want to reach, and here’s why: the worst thing I could do is expect the faculty to push even one button. As soon as I expect them to push a button for teleconference or telepresence, it shuts down the show. And so, we’ve literally automated the classrooms: when somebody walks into a dark room, all the equipment senses somebody stepping in and fires up automatically, and all they have to do is have an IT person present to enter a nine-digit code to reach the world on their behalf. And it archives the content and puts it into the learning management system, which is D2L in our case.

Same thing with virtual reality. We don’t want them to change anything! That expectation is why gamification has never worked, nor ever will. But augmented and virtual reality will literally change the way they can enhance that which they already do very well.

Michael Krigsman: So, it’s such a natural extension of what they already are doing, that people just naturally, immediately get it. They grasp it.

Michael Mathews: Absolutely. We have faculty members already watching NBA games or NFL games in virtual reality, or reading the New York Times in virtual reality. It’s a foregone conclusion. This is part of our society. And if we put the things in place that allow faculty to benefit, and don’t expect them to change anything, we’re home free. But on top of that, the icing on the cake is students begging for it, and saying, “This is part of our world now.” So imagine being a student: you’re a senior in college or a university, and you’re going to get a job, and you can stand in front of this engine and explain what it means, and you’re applying for a job at an aircraft company; your resume is going to look slightly better than everyone else’s.

Michael Krigsman: But, you know, to what extent do you say students are loving this? I mean, it’s fascinating, I can see why. But to what extent is it just because it’s kind of the cool Pokemon toy? In other words, does the bloom come off the rose, or is there a sustained benefit?

Michael Mathews: Good question, Michael. You know, it’s not just a novelty, really, when you think about it. What it’s doing is engaging a learner. You know, fifteen years ago I might have said that we’d never use smartphones in a classroom for studying, or grades, or accessing health information. But now it’s a foregone conclusion that we do. And I believe the same thing: whether it’s called “mixed reality” in the future, or “augmented,” or “virtual” is not the point. The point is that our society’s technology, with Moore’s Law far exceeded years ago already, is changing the way we can enhance our lives. We can speed up the connection points around the world to educate everyone. And when we do that, suddenly we hit a tipping point. Three years ago, when I stepped onto the campus of Oral Roberts University, the goal that got me here was really to come up with new paradigms to reach millions of people around the world. I fell in love with that. And, true to form, Oral Roberts University has had the funding behind that, they have the support behind that, and here we are, reaching millions of people instantaneously now. Now the only thing holding people back is paying their tuition, and I believe one day that may change as well across the world.

Michael Krigsman: Okay, let’s go back to Steven and Jesupalumi, who are standing by. And by the way, I have to … I want to say a thank you to Mark Orelen and to Zachary Genes on Twitter, because I tweeted out, “Can somebody take a screen capture of this, because it looks so cool!”, and they both did. So, thank you very much.

Michael Mathews: Let me add, it’s much more than cool. You’ve got to come on campus here, and you’ll be more than impressed.

Michael Krigsman: I believe that. Okay, so take us inside that jet engine. Just take us into this thing that we’ve all seen the outside of, on planes, and take us inside.

Steven Guzman: So when we go over to the front, we’re in the jet engine… To the very front…

Michael Krigsman: I love that phrase, you know, “Can you bring the jet engine over here, please?”

Steven Guzman: [Laughter] Take a few steps to your right… And let’s go to the very front… Alright… Going to ease us on in…

Michael Krigsman: And I just want to tell everybody who’s watching that what you’re seeing here on the screen at the moment, on the right-hand side, is a kind of stop-action on one screen of the outside of the jet engine, and then on the left, Jesupalumi is walking us through the inside. And you can see he’s got his iPad controller… So I guess I’m picking it up pretty quickly! It looks pretty easy, because now I’m doing your job! [Laughter]

Steven Guzman: [Laughter] Yeah. So, Jesupalumi is playing professor right now. This is where we get into the education part. What’s so special, and what really makes us next-level with virtual reality, is that no longer do students have to try to conceptualize things they’ve never seen and only read about; we can bring them there in virtual reality. And when we’re talking about jet engines, something so complicated, the professor can point things out that they couldn’t necessarily point out on paper because it just wouldn’t make sense. And we’ve actually had an engineering student worker come up here and test-pilot this jet engine. He said, “You know, the last three years in engineering have been fantastic, but being here inside of a jet engine brought it all together,” and it put a nice capstone on what he’s been learning. So it’s really special to be able to get inside those jet engines.

Michael Krigsman: Again, you know, just think about the terminology. Mike, it’s really cool to get inside those jet engines. I mean, five or ten years ago, that terminology would be… It wouldn’t exist!

Michael Mathews: Unheard of! But now, imagine this: when you have a learning object, you’re buying it from some company and putting it into your learning management system. Now a faculty member can step in front of any backdrop - a jet engine or an archeological finding - and create their own learning objects. Three minutes later, they have their own video, and they can send it out to the students ahead of time if they choose to.

Michael Krigsman: So these learning objects; maybe tell us the story of how you get these 3-D models that you can walk through, and how you adapt them. How does a professor adapt them for use in your environment?

Michael Mathews: Absolutely. So, what we decided to do was, again, to go enterprise-wide: “Let’s purchase enough of these [...] from the get-go, so no one’s saying, ‘Is this just the same as what’s taught? I can see how this works for an engineering school or a nursing school, but not for a math class,’” or whatever the case may be. So we purchased eight thousand of them through Eon Reality, and they’re all on the smartphone. There’s a whole library of them now, so we can actually experiment: put them on a smartphone first, and then start playing with it - voice over the learning objects, save it to the learning management system - but it also plays into the cube, or onto the iCatcher screen as well.

So across every device that we own, we have eight thousand on the platform. Now, we’re already seeing faculty step up to the plate and create their own. We now own the software, so you have to think of it this way: devices will come and go - iPads, tablets, whatever the case may be - but the operating system sticks around; the Microsoft operating system has been around for thirty years. So we purchased the whole operating system, or platform, for virtual and augmented reality creation. We can do anything we want now, and faculty put their signature on it. Every time somebody clicks on their learning object from this day forward, they get credit for it. That’s pretty powerful.

Michael Krigsman: We have another question from Twitter, from Mark Orelen, who’s asking, “Is there an application for this type of virtual reality technology in business-related courses?”

Michael Mathews: Absolutely. You know, we’ve had numerous challenges just like that, and I’d say, “Give us a day, and we’ll walk you over; we’ll walk you into an Excel spreadsheet.” And it comes up on a screen, an Excel spreadsheet. And then somebody says, “Hey, what about a PC? Could you walk us into a PC?” I say, “Yup! Come on over!”; we walk them in and show them where the power supply is located. Somebody else came and said, “Could we teach somebody in an international business class how to eat properly when they go to South America?” I said, “Yes, come on in. We’ll show you that as well.” And so we’ve not been stumped by any type of environment, because again, virtual reality is creating a new environment to walk into, and it’s an amazing day to watch creativity and innovation come alive.

Michael Krigsman: What are some of the challenges associated with this? You know, when we do a demo, it all comes together in the smoothest way, but I know behind the scenes, nothing comes together in the smoothest way! So, what are some of the challenges involved for a school to set this up?

Michael Mathews: You know, the challenges really are this: have support from the executives. And I’m telling you, when you invest a good amount of money in something like this, you need that support. But more importantly, you have to have the reputation. So the challenge really is to build the reputation by choosing the right partners that work with you, and the right software. So we picked a company that has already brought together education, entertainment, and industry with augmented and virtual reality for twenty years. We weren’t about to go buy a pair of Oculus glasses - no offense to Oculus or Vive - and think that that’s going to change anything. The technologies will come and go, but we wanted to own something, so you can see exactly what you’re seeing now, live. And this isn’t rehearsed or anything. This is live. We’re showing you learning objects, but now imagine the students coming in there.

So the challenge is really credibility. Over the last three months, we’ve had faculty come over, students come over, and they’re becoming believers. But if we had said, “Hey, we want you to change your curriculum, and we want you to change Week Two so that you import this into your curriculum,” it would have failed. So the challenges are the ones you know as well as I do, but if you build credibility, and you invest wisely, and you can deliver the goods, people become believers.

Michael Krigsman: So there was essentially a change management and buy-in issue, where you had to gain the buy-in. Gain the buy-in from whom? Who are your stakeholders in this kind of situation?

Michael Mathews: Our stakeholders - our Board of Trustees, our president - they’re visionary people. But remember that before I came three years ago, they had already laid the groundwork by having a globalization case statement, whose imperative number six says this: using new paradigms in technology, reach millions of people with whole-person education. I love that. It was teed up. They support it, they have the vision, but now they expect me to come up with the kinds of technologies that will truly reach the world.

I’ll be honest. We had the Board of Trustees come in and look at the facility recently, and two of them were in tears. Literally, it’s the first time as a technologist that I’ve seen leaders like that in tears, because they saw it; they could touch it. And this is a different day, a fabulous day, that we’re doing this kind of stuff.

But I will say one funny thing happened, because when people are ready to sign a contract, that’s when they go, “Oh wow. That’s a lot of money.” And that’s true. And I said, “Hey, Six Flags down in Dallas just gave out virtual reality glasses. Now when you go on a roller coaster, that’s the experience.” And it works to a certain degree. And I said, “Why don’t we put a roller coaster in our parking lot, and let people experience virtual reality?”, and they got the picture and said, “Okay, well that doesn’t make any sense. The insurance policy will be high.” I said, “You know, let’s do it right.” Again, the support was there; I appreciate our president, Dr. Billy Wilson, so much for casting the vision to reach the world.

Michael Krigsman: And I want to remind everybody that you’re watching CXOTalk, and we’re speaking with Mike Mathews, who is the Chief Information Officer of Oral Roberts University, and he’s describing ORU’s foray into virtual reality, augmented reality, mixed reality, and what you’re seeing on the screen right now, aside from Mike and me, is a live image of their virtual reality - what do you call it? Virtual reality cube?

Michael Mathews: Yeah, the screen on the right is the iCatcher. It’s got a 3-D projector, and fifty people can be standing there watching that; but then there’s the room where five people can be immersed right into it, and that’s called the iCube.

Michael Krigsman: And, once again, if we can take a look at the Ethos glasses that Jesupalumi is wearing…

Michael Mathews: And Michael, I should note that Jesupalumi is a student from Nigeria, and he’s picked this up like you wouldn’t believe, as most students do. And Steven Guzman, who’s there as well, is a graduate from two years ago. So these aren’t people who have been in the business for twenty years; they’re fresh, but this stuff is exciting for them, and they pick it up really easily.

Michael Krigsman: And I would imagine that from the point of view of getting a job when they graduate, having these kind of skills puts them way out in front of others, I would think.

Michael Mathews: Absolutely. I’ll have Steven talk to that a little bit.

Michael Krigsman: Steven, please, yeah.

Steven Guzman: Yes, absolutely. I’m insanely fortunate to have been able to work at Oral Roberts University right after I graduated in 2014. But, if you look online at the different job opportunities that are out there for virtual reality, they’re all out in California with Google, Facebook, I mean a lot of the big hitters, NVidia; so I have an amazing opportunity to be a part of what Oral Roberts University is doing, and really having the opportunity to change a generation is what I’m fired up about; just how everything that we do, from three months ago, a year ago, and on is going to change the next generation of Oral Roberts University students is really what I get excited about.

Michael Krigsman: Now, what kind of glasses are you wearing? You’re not wearing the glasses that look like they came out of a science fiction movie from the ‘60’s. You’re wearing something else.

Steven Guzman: So, my glasses: they don’t have the fancy nodes on them; they’re actually for the other viewers who step inside of the cube. So, when I step inside of the cube, my glasses are good for a change of perspective, but I’m still going to be able to view the virtual reality; that’s how you get four or five people who can go alongside the professor. So, if I come inside here with Jesupalumi, his perspective is still going to change the environment in my glasses to allow me to see it.

Michael Krigsman: So you …

Steven Guzman: So you don’t have to be inside the cube to view it. You can stand outside. So, you can have ten, fifteen, twenty more people outside the cube viewing the virtual reality.

Michael Krigsman: I see. So, he’s controlling the virtual reality view with his glasses, and these are 3-d glasses I’m assuming.

Steven Guzman: Yes, sir. These aren’t your typical Ray-Bans.

Michael Krigsman: Yeah. [Laughter] And so, when you think about this, to what extent is your participation because it’s fun, or because you think you’ll learn more, or because you’ll get a better job, how do you evaluate it, in your mind?

Steven Guzman: So, the way I see it in my head is it’s a few things. One, impacting a generation. But it’s also really fun to do. You know, I ask myself the question all the time: do I even work at all? There’s the common saying, “If you have fun at your job, you never work a day in your life,” you know? And I have a great time working with Mike, and Mike is an amazing leader, an amazing mind, brilliant, and we’re all so fortunate to have him here. But it’s passion that drives us, and it’s a group effort; it’s not just me in the VR room day-in, day-out. Mike’s here, and our entire IT team.

Michael Krigsman: And Mike, to what extent is this an IT-driven project, versus a more educational or institutional-led project?

Michael Mathews: You know, this is a campus-wide-led initiative. In fact, one of the greatest compliments I get from [the] President is this: “You know Mike, it’s interesting. You don’t even like technology.” Now, I like that, because it’s not about technology. It’s really about how you align - not just from a strategic point - good business sense with mission. And so, we’re able to fulfill the mission by just doing simple things that other people can do. And one of my passions, really, is to say, “Wait a minute. It can’t be that we’re able to track Fitbit data and do all this stuff in life, athletics and so forth, and yet we can’t do it in education.” So simplification equals multiplication. If we can simplify things and make them campus-wide, we multiply things.

Michael Krigsman: And how do you align the business goals with the mission; the business demands and requirements? You have to stay in business, you know, you require income and so-forth, how do you align that with the mission?

Michael Mathews: An absolutely good question. And we really do it by being simply aligned, over and over again, repetitively, iteratively… So when you think of a president who has a strategic plan, most presidents would say, “You know what, I’m tired of a strategic plan,” because it changes, it sits on a shelf. So our president and the Board of Trustees have approved an adaptive plan, the University Adaptive Plan; it means it’s flexible, it’s nimble. We live in a nimble world that’s ever-changing, and how do we change with that? And so, as the IT leader, I have to be nimble enough, and wise enough, to say, “Hey, I’m not going to do something that doesn’t make sense. Let’s be adaptive.” It’s like putting people in a digital world to not only survive, but thrive. And when you can help the leaders on campus, and you can help your students, you’re changing the world. You’re having an impact in a way you never thought possible.

Michael Krigsman: We have another question from Twitter, from Zachary Genes, and his question is similar to the one that we received earlier. He’s asking, “What about applications in areas such as storytelling and marketing?”

Michael Mathews: Absolutely. Again, it’s a great challenge, and we have that. In fact, somebody challenged [us] and said, “Hey, is there an application for public speaking? Can you teach somebody to speak in public?” In fact, one of the faculty members, Denise Miller, has 325 students wearing VR glasses [who] can actually practice speaking. It’s one of the things people fear most.

And marketing is a phenomenal one, because if you want to talk about helping people understand marketing, take them to a couple of places that failed in real life. We now have Google Earth and virtual reality connected together. We can take people right inside - I’m a Green Bay Packers fan - so we’ll say Lambeau Field, Green Bay, Wisconsin, and say, “What would you do if you were a marketing person, and you wanted to market this better?” We can take them to McDonald’s. We can mock up almost anything in real life and let people start experimenting without doing the travel. In fact, for our athletes, we’ve got a professor, Kerry Shannon, using the same connectivity in virtual reality for athletes who aren’t on campus, or are off at a basketball game or a soccer game.

Michael Krigsman: I would imagine, and maybe this is an obvious point, but I would imagine that modeling physical locations, objects, and architecture is far easier than somehow modeling concepts. Like, when Zachary Genes asked about storytelling, can you model abstractions like that in some way?

Michael Mathews: I believe so. In fact, if we brought up an image, we could show you the music playing along with the image. So when you go underneath the water, you can hear the theme from the movie “Jaws” start playing. But we could have changed the music; we could have played something different, multisensory. But talk about storytelling: how do you create a story? Well, there’s a place involved, there’s maybe some sensory element behind it, and you’re starting to create it. And so we have it now where we can take you out to the planets, and actually create the right story behind how the planets were created. We can tell the story about how fast the planets revolve around the sun. We can create stories and modify and apply them. Why? Because we own the software, the platform, to do that.

Put it before us, and we’ll find out if we can create it. In fact, Steven Guzman - he’s two years out of college, the university here - took all of about four hours to create a whole story for our donors about what our new dorms will look like. Never done before. I say it was eight hours, he says it was only three hours. So, somewhere in between.

Michael Krigsman: And what are the technical skills needed to create this type of environment?

Michael Mathews: You know, boy, it’s a tough one to answer. I’ll say this: 35 years ago, the genius of the day was somebody who knew enough to take Xerox files and port them onto different operating systems and devices. Today, the genius is the person who can take almost any object - infrared scans, videos, pictures - and start recreating a story and movement in virtual reality. And so it seems, so far, that a few programming skills are required, a lot of imagination, a little bit of script writing; but the good news is, we own the platform, which has a document this thick that says, “Here’s how you start moving things around.” And we now have it where we can actually reach into a body in front of the mirrored virtual reality screen and pull out my heart, pull out my brain, and turn it around. It’s pretty phenomenal. And that’s all created probably within days, and we will actually be starting a school here on January 23rd to allow people to learn how to program both augmented and virtual reality.

Pokemon Go - hey, I get it, I know why so many people use it, but it’s pretty useless to me because it’s just a bear in the air; it’s placing something via geofencing. But what if we could put all learning objects on a card, or in space, and start leveraging them that way?

Michael Krigsman: So, the goal is for subject matter experts to be able to create these realities, rather than a dedicated programmer or technologist?

Michael Mathews: Right, but think of the subject matter expert. People become professors because they believe they’re subject matter experts, and there are professors, I understand, who maybe regurgitate other people’s information, but there are a lot of professors who actually create their own content. Those are the subject matter experts. So, we already have a professor - one of the deans, a department chair - sitting at the computer creating their own experience, and they’re creating something so their students can do clay modeling in virtual reality.

Michael Krigsman: We have about five minutes left, and I’d really like to go back to the digital transformation metrics that you use, because I think that one of the challenges of digital transformation in any environment is how you measure the outcome. Can you talk about some of these metrics? It has certainly come a long way from the traditional IT metrics of, you know, latency or system uptime; in fact, your metrics have to do with the exact intersection of technology adoption and achieving the business goals.

Michael Mathews: Absolutely. So, we did a baseline. Again, no measurement matters at all if you have no baseline. And your baseline may not even be accurate at first, but the fact is, over time, you can find out and tweak it to make sure it is accurate. So, we took a look, and I’ll give you an example. I tease people about miracles. They say, “Mike, do you believe in miracles?” I say, “I never really believed in them, but I rely on them. In order for my career to go places, I’ve got to rely on miracles. And if a miracle happens in my lifetime, where I get to see something put on a smartphone, was I prepared digitally to take advantage of it?” And so we did that in numerous cases, to say, “If there just so happens to be something, from an academic perspective, the equivalent of Pokemon Go, is our infrastructure and our capability to leverage it ready to go?” And the answer’s yes. And so, 100% of everything Oral Roberts University owns from an electronic standpoint is accessible and integrated on the smartphone. That’s transformation.

Three years ago, that was not the case. Three years ago, we weren’t working with good partners. Three years ago, we didn’t have the infrastructure, but suddenly that index starts putting carrots in front of people, and the place starts thriving. And again, this isn’t to brag, but it is to say we’ve been fortunate, and we’ve won global and national awards for our IT invention and innovation. In fact, we’ve just trademarked the name Geonetics, which is to say: if technology is so powerful that it can implode countries like Egypt, Tunisia, and Libya, because young people are waking up with it and saying, “We don’t have to live like this anymore,” that’s pretty powerful. Are we ready, and can we actually take everything and leverage it in the same way, and use that power for the betterment of humanity? But who understands it? If technology companies’ only interest is to keep selling more products, we’re all in trouble. But our job as leaders and innovators is to say, “Wait a minute. Let’s not just worry about selling more technology; let’s actually implement less, but do it wisely.” And that’s exactly what Geonetics is about: what can people learn and take advantage of, not just so they can own a smartphone or another big plasma screen, but by wisely investing and aligning for the betterment of humanity?

Michael Krigsman: And in the last couple of minutes that we have left, can you talk about adoption? How do you think about adoption? How important is adoption? How do you encourage adoption? How do you measure it?

Michael Mathews: You know, adoption… You know, I’m fortunate. I’ve been through four of these cycles of helping a university in my career already. And by the fourth time, I’ve learned a couple of valuable lessons. One, don’t mess with faculty. Encourage them, leverage them, let them do what they do well, and support what they do. And so, that’s not even a showstopper for us, okay? They adopted because they know my interest is their interest. If I can help faculty save seven hundred hours every semester by taking educational data off of a Fitbit watch and putting it into our gradebook, I’m a hero. And that’s what we’ve done.

Now, I’m a hero not because I’m smart; it’s because I’m listening to them: “Help us be more efficient.” Don’t just make yourself look good, don’t just brag about technology. Hey, if you win a couple of awards along the way, that’s great. And so, adoption happens by winning awards, that’s for sure. But when you can be covered in over 500 newspapers and magazines in one year because of innovation, you’ve just won over everyone; because now they become believers and say, “Wait a minute, we thought that was just a vision. We thought that was just part of the strategic or adaptive plan, but it’s become reality now.” And so, over a three-year period, let’s say, we’ve been successful, with the help of our partners, on five major initiatives that have changed the way people view things. Now, three years later, that digital transformation index keeps growing in the right direction, but it also makes believers of people who want to be a part of it; get more people in your boat and you’ve got a better boat.

Michael Krigsman: Okay. We have been talking with Mike Mathews, who is the Chief Information Officer of Oral Roberts University. What a fabulously interesting show this has been, but we have to say thank you to Steven and Jesupalumi. So guys, why don’t you come over and just … we just want to say, “Thank you.” I see Jesupalumi is there in the background.

Steven Guzman: Jesupalumi, come over! [Laughter]

Michael Krigsman: And, what an image of these two guys with their glasses. Thank you so much, we really appreciate your time today.

Jesupalumi: Thank you! We thank you for having us!

Steven Guzman: Thank you sir. It’s been a pleasure.

Michael Krigsman: Thanks a lot. And, Mike Mathews, thank you, we really appreciate it! Everybody, this has been Episode #210. Next week, we have two shows, so check out CXOTalk.com/episodes and you can see the schedule. Have a great week everybody, and we’ll see you next time.

Digital Transformation in Insurance with Harford Mutual

Tim Baum, VP and CIO, Harford Mutual Insurance
Tim Baum
Vice President and Chief Information Officer
Harford Mutual Insurance
Dion Hinchcliffe, Chief Strategy Officer, 7Summits
Dion Hinchcliffe
Chief Strategy Officer
7Summits

Although insurance has long been protected from digital disruption, the industry faces serious challenges in its current form. This episode will explore the digital transformation of insurance with a leading industry CIO.

Tim Baum is the Vice President and Chief Information Officer at Harford Mutual Insurance. Baum leads the IT department as it continues to convert the company’s legacy systems to the Guidewire Insurance Suite. Baum’s extensive technology project management experience allows him to seamlessly take the helm of the Project Team and assist them in the successful implementation in 2017.

Prior to joining Harford Mutual, Baum was the Division IT Director for Zurich Insurance Company Ltd, Vice President of Corporate Development at FEI Systems, and Vice President at T. Rowe Price. He graduated from Covenant College with dual degrees in Information and Computer Science and Business Administration and earned his PMP (Project Management Professional) certification from the Project Management Institute.

Download Podcast

Digital Transformation in Insurance with Harford Mutual

Dion Hinchcliffe: Hello and welcome to CXOTalk, Episode 211. I'm here to talk about digital transformation in insurance, and we have a very special guest and a dear, old friend of mine who will speak to you shortly. I'm broadcasting live from Genoa, Italy, while on a road tour, so you have a European host for your show, with a guest broadcasting from the United States. So on this episode, Tuesday, January 10th at 1PM Eastern Time, I'd like to give a welcome to Tim Baum. He is the VP and CIO of Harford Mutual Insurance, and welcome to him.

Tim Baum: Thank you, Dion. Good talking to you.

Dion Hinchcliffe: Yeah, so we have known each other for twenty-some-odd years now at this point, and we've been involved in a highly regulated industry, and I want to welcome you on the show [...] your position at Harford Mutual Insurance, and I want to talk about the disruption that everyone's been talking about. And we'll discuss how insurance is being affected by the changes in the digital world, how IT is being transformed, and kind of give us a perspective. We had Alexander Bockelmann recently from UNIQA Insurance, and he gave kind of a European perspective, and I was hoping to ask you to give us a North American perspective on insurance, which up till now has been a relatively protected industry. And so, why don't you take a few minutes and tell us briefly about yourself, about Harford Mutual Insurance, and your role there.

Tim Baum: Okay, so Harford Mutual is a regional carrier based on the East Coast. We're in about seven states plus the District of Columbia, we have about eight different business lines, all commercial; we don't do any personal lines. The company is at about $185 million in direct-written premium and, as I mentioned, is 175 years old. So we've been around for a very long time. I think we have a great opportunity here, in that we've been around so long we've got established relationships, we sell through agents, and there are just a lot of opportunities. I think for regional carriers, the technology side of the house has always lagged somewhat, with internally developed, homegrown systems; and as the industry has grown, there have been more packages available to use, and the integration of those packages, together with the confluence of digital technology, has really started impacting what is going on.

So I've been at Harford Mutual for about four months now, and it's a really exciting time. One of the major things we're working on is core system replacement. We had a system that was built back in the '90s and early '00s, developed internally (the resources who developed it are still here), and we're taking that and really trying to move into the 21st century. So, replacing that technology, we're starting to implement the Guidewire package. If you go out and look at the insurance carriers, Guidewire is one of the main players in the industry, and that's who we've decided to work with.

So, within that, we're really trying to form a strategy around it. Just a lot going on, a lot of excitement, a lot of opportunity, not only from a technology standpoint, but from a business standpoint as well; and there's a lot of technology behind that.

Dion Hinchcliffe: Yeah, so are you finding that the commercial insurance packages are supporting the kinds of digital capabilities that we're seeing emerge very rapidly in the space? There are lots of very exciting things happening in digital payments. You have blockchain, you have the Internet of Things, all of which we're learning has an impact on the insurance industry. So, you know, what are you finding? Is this future-proof, or are you building on a foundation that's just not going to move fast enough?

Tim Baum: That's really interesting, because I talk to a lot of people. One of the interesting things is that a lot of carriers are all doing core replacement systems, and they're all two, three, four years in. And everyone kind of talks like, "We've just got to get this done, and then we're done." It's almost like, "Then we've hit the end of the road." The analogy I always give is, "If you pulled up in your brand new, off the dealer floor, 2013 Buick, and you bring your friend over and you say, 'Look at my new 2013 Buick,' they're going to look at you like, 'What are you talking about? I mean, that doesn't have the variable speed control, it doesn't have the cameras on it. That's not new; but to you, it's new!'" So I think one thing that Harford Mutual and other carriers have to realize is that these core system replacements are not a one-stop and then you're done. You've got to do it, and then how do you continue to innovate? How do you continue to expand on the capabilities?

I think the industry, along with the carriers, is struggling to try and get ahead: do things faster, quicker, think outside the box in how you're doing things. And one thing that I'm thinking about is different ways to develop. So get away from the traditional model of, "I have staff, I can use staff augmentation." Can you really look at incubators out there? Can you look at some type of code effort where you can actually throw something out there and have some [...] developed?

So, I think what happens is that while these providers are trying to do it, they're kind of in the same quandary that we're in. So, how do we get a foundation to build on some of those products, but enable it to move forward? Seeing some of the press releases from some of the software developers out there, they're trying to figure out how they can deliver faster solutions so that you can go into new states, you can go into new lines of business. So, they're kind of doing the same thing, whether you go to cloud, whether you go to hosted. And it's also a question of how you approach it: do you use a full stack, or do you build pieces of it? Do you use AWS, go out and build some things, plug them all together into a solution, and how fast can you do that?

So I think one of the challenges is going to be the ability to be nimble, yet still work within a regulated environment.

Dion Hinchcliffe: Absolutely. Well I want to really kind of tease that apart later on in the show. Being nimble and being regulated don't necessarily go together, but there are some interesting things that are happening. So, you're primarily a B2B insurance company. Is that correct?

Tim Baum: Yeah. Basically, you could say that. So we deal with the...

Dion Hinchcliffe: So ... Go ahead.

Tim Baum: So I was going to say, we deal with agents. We've got a network of agents. We're a mutual, so we deal with a network of agents that we've appointed to sell within the states that we're in. We don't go direct. But as I think most people in the industry will ask: does that business model hold in the future? You know, as the consumer becomes more educated, can they go direct, or do they need that ... How we view the agent is that the agent is kind of the educator. We're selling to a business. Does a business owner really understand the risks that he has? Does he really understand the products that he needs to protect himself? The agent provides that information.

Now can technology change that? Can someone go and get that same information, guide it, and automate it through the internet? I think that's still something to be seen. I think also the ages, millennials coming up: what are they going to learn? Where are they going to get their information? What are their ... Who are their trusted advisors? What are their trusted advisors? And right now for us, it's those agents that have the relationship.

Dion Hinchcliffe: Yeah, and so how does that affect the digital experience? So you've got to design services that deliver value to the agents and deliver value to the customers, and you want them to prefer to offer your services. Again, you're in a highly, I mean a ruthlessly, competitive environment. How would a company like Harford, you know, offer a digital experience that can compete with the big guys?

Tim Baum: It's funny you say that. So the president and I always banter back and forth, because my position is every company is an IT company. We're an IT company that just so happens to sell insurance. He'll banter back, "No, we're an HR company that just so happens to sell insurance." So the human component of it is really important. So it's the relationships that we have. So, we're really involved in the community, we're really involved with the agents because that's an important portion of what we do. The people that work here are really committed too. Our slogan is, "Committed to mutual success." So, we really do want to be successful, but we can't be successful unless you're successful. So then, my job is how do I bring that technology to enable and foster that to happen? So, if you have great people but you have bad technology,  it's not going to go anywhere.

So, one of the things that we've recently done is, I'm actually moving one of my individuals out of IT and up into the marketing [department]. He's not going to be like a Chief Digital Officer, but he's basically going to be playing a similar role, in that he's going to be out there meeting with the agents and the customers: "What do you need? What are those functions, what are those services that you need to better enable you to know what you're buying, what risks you're covering? What am I getting for my money?" And so, with him knowing the technology side of it and getting in on the marketing side, I think it's going to help us tease out what we really need in order to deliver those systems, those apps, whatever can provide a better experience. It then falls back on us to construct that and lay that out.

So, that's kind of one of the approaches we're going with because things are changing, and how do we know what the needs are, unless we're really meeting with the users and listening to the users? And to us, that's face-to-face. That's how we're really going to understand what the individual needs, how we can differentiate ourselves from our competitor.

Dion Hinchcliffe: And that differentiation is your strength because of your 175-year history; going way back, face-to-face relationships are what you're actually good at. So are you going to bring a digital version of that to bear, or are you really going to use face-to-face the legacy way and reinforce the differentiation?

Tim Baum: So, it has to be face-to-face, but can it be virtual face-to-face? I don't think we believe you can do away with that; if everything becomes blockchain transactions, where is that human interaction? Then I become a commodity. And I don't want to be a commodity. I want to have that relationship. So, how do I have that relationship? How do I maintain that relationship when we're headquartered in Harford County, Maryland, and we've got something in North Carolina? Our president ... I'm sure everyone's probably heard of all the wildfires that hit Tennessee over the past couple of months. We actually did a lot of insurance there, and [...] our president is going down there to meet with some individuals, because of the impact that it's had on the community, and our involvement with them.

So we're not just an insurance carrier where you have a claim and we just pay the claim out. We're concerned about the communities that we actually insure in. We're concerned about the agents that actually sell those policies. So, it is that personal touch.

When someone calls, one of the things that probably differentiates us: if you call Harford Mutual, you will not be put into a phone tree to get to someone. There is actually someone, Norma, who sits at our switchboard and answers the phone. And she will direct your call where it needs to go. It's that type of mentality, from the president down. We're not going to go completely digital. There's a human interaction. We want you to know that when you call us, you're not just being pumped through some AI system that's going to route you to the right place. We want to be able to hear you, we want to be able to talk to you, we want to be able to resolve your question. It can't all be resolved through a system. We might like to think that, but there's nothing like human interaction.

Dion Hinchcliffe: Well that really does set you apart, not having a voicemail tree. I think you've got to be one of the last holdouts not to have that. That's great to hear! So, please, we have Tim Baum, VP and CIO of Harford Mutual Insurance on the line. We'd love to take your questions on Twitter, using the #cxotalk hashtag. Please ask your questions about IT, about digital experience, about digital transformation in the insurance industry.

So Tim, we're seeing, though, that certain new models of technology like social media and social networking, and online video, are bringing a lot of that human dimension to bear. So we're seeing that even insurance companies are utilizing them; they've got to speed up, they've got to do more. And we're seeing the introduction of roles like the Chief Digital Officer in the insurance industry to take that on and say, "Digital's going to be a P&L soon enough in our industry," never mind that we have to really, continually up our game. It's a treadmill now. You can't slow down, so you need someone really in charge of market-facing digital. Where do you guys fit on that point of view, and what are your thoughts on that?

Tim Baum: So, yeah. I totally agree that that's an extremely important role. Now we're a smaller organization, so at the C-level, we don't have that Chief Digital Officer. That's kind of a combination of myself, and this individual that I'm putting up in the marketing area. When I think of digital, I really break it down into you've got your predictive analytics, you've got your customer interaction, you've got the system that is actually delivering the operational systems, and then you have potentially robotics. I think within our realm and how we're structured and how we're sized, the robotics piece of that digital revolution, I really don't see us playing much of a role in that.

Now the other three quadrants, from a strategic standpoint, are what we're working on right now. So, predictive analytics: we actually purchased a Guidewire product called Eagle Eye to help us with predictive analytics, on the claims and the policy side. We're actually going to start analyzing the data and understanding where the risks are, because if you can't, you end up taking business you don't want. There's business that you want that you might not be selling to, and there's business you don't want. So how do you emphasize and target the business that you really want? That's where some of the predictive analytics [come in].

Then we have a whole project that we're working on for our core system replacement, within which we are redoing how we do our billing, how we do our claims, how we do our policy administration, and how we do underwriting. So we have that project going as well.

And then the third leg, to me, is that customer interaction; so, to your point, the social, the Twitter, the different accounts: how do we actually get engaged in that realm? That's where we're really focusing right now on the portal. So, coming up with a digital strategy where the portal is really the focus, where we can deliver browser-based, device-agnostic solutions out to the agent and the consumer.

And so, those are the three areas that we're really trying to focus on in the coming years, and this year specifically, to get those things moving. Our core system replacement is the one farthest along the path, then I would probably say our predictive analytics is next, and then the one we're really going to start tackling in the next couple of months is that whole digital customer interaction strategy that we really need to attack.

I think if you read some of the industry articles, they'll say about 83-87% of carriers are focusing in 2017 on that digital strategy. So in order for us to be around for another 175 years, we've got to be right there with them. We can't be that 13 or 15% that aren't worrying about it. Just because we've been here for 175 years doesn't mean we'll be here for another 175 years. How we conducted business back in 1842 is not how we're conducting business in 2017, and it won't be how we're conducting business in 2050. It will change. We have to stay with that.

Dion Hinchcliffe: Yeah. Exactly. And so, some insurance companies still feel like they're operating in, you know, the 1850s, but I do think that's changing. Now we've had CIOs from very large insurance companies say that they believe that in 5-10 years, the classical insurance company will disappear, right? That's what they say. They plot forward the trends: who is buying coverage, how coverage is being purchased, how you can instrument vehicles, how you can even instrument homes and see what the actual behavior is so that you know what you're insuring, right?

Tim Baum: Right.

Dion Hinchcliffe: And you can buy, you know, incredibly efficient rates, and it all gets self-serviced, and that makes the [...] self-insured, too. So, you know, that's the next wave in terms of insure-tech. So, what are you seeing? What sort of relationship do you see between the relentless innovation that you have to undergo as CIO and the future of your organization? I mean, I'm asking the big questions here, but we're all on the hot seat [...].

Tim Baum: So I think within the insurance industry, there's a delineation between personal and commercial lines. On the personal lines, I think you are seeing that. You've got your analytics, you know, the things you can plug into your car to really say how you're driving. I think on the personal lines, because it is more consumer-based, technology is probably going to move faster than on the commercial side. On the commercial side, in general, you know, you have larger contracts. You're insuring different things.

But, as you mentioned, the Internet of Things: Do we have companies that have equipment that we're actually insuring, and with the Internet of Things, can we actually pre-diagnose when a problem is going to happen to help shut that down? That gets actually into loss control. So that's a whole other aspect of how do we handle loss control? Can we get that education? Can we actually help the consumer know where the risks are, and how to avert that risk? Because while we insure them, the best scenario for us is they pay us a premium, and they make no claim for us.

So, if we can spend money up front to help educate them on how to not have a claim happen, the better off we are financially, and the more successful they are. No one wants to have an accident. One of our lines of business is worker's comp. No one wants to have one of their employees get killed or hurt in an accident. Was it something that could have been prevented? Is there education that could have helped? So how can we use technology for that? That's the area.

I think as far as what will insurance carriers look like in five to ten years, I think we always think things will happen a lot faster than they actually happen. I think because we're in a regulated field, we're even having to educate the regulators on how we do things.

I think one example is with drones. There's a lot of talk around drones. My understanding with drones is that if you want to have a drone in your claims department to go out and assess (you've got damage on a building, you want to fly it around), the drone operator actually has to have what looks like a co-pilot with him on the ground, where they can fly it around to check out the damage. So now instead of sending one insurance adjuster out to check out the claim, I've got to send two guys? Is that really more efficient?

So, will that evolve? How can the regulations change to enable that to happen? Are there technologies out there that might disrupt? Drones are almost an intermediary thing, something like the fax machine, which was really great when it first came out, but, you know, now....

Dion Hinchcliffe: Now, completely outdated.

Tim Baum: Except for insurance companies. We fax stuff out the wazoo. We love fax machines. We use a lot of paper. But how do we take that paper, actually digitize it, and move it forward? So I think there are areas where there can also be optimization. I think there also probably, not probably, there will be other lines of business that we're going to sell as things evolve. I think also about how we insure it. So, do you move from buying an annual policy to buying a policy for an activity? Do you move to something that is more of a transactional insurance?

I think it will be very interesting to see how some of the larger insurance carriers try to handle that, because being large makes it more difficult to be nimble. I think one of the great things about Harford Mutual is that we're small, but we think big. We don't have a lot of the encumbrances that a large organization has, and that allows us to move, target where we want to go, and actually deliver there.

So, where is it all going to be? You know, autonomous cars. Some people think autonomous cars are going to be out next year. I don't think you're going to have the freeways clogged with autonomous cars next year. Now, will they eventually get there? Yes. But it's not going to be next year, and I think that's the same with technology. There's so much data around. There are so many technology avenues that you can go down.

Dion Hinchcliffe: And that's the thing. You know, there are so many choices, and as Bill Gates famously said, we tend to overestimate change in the short term, but we end up underestimating it in the long term. We won't believe what the world looks like in fifteen years, but we expect a lot of it to happen next year, right?

Tim Baum: Yup.

Dion Hinchcliffe: And that's the challenge. So let's turn the conversation down a different path, because as I've said when you talk to organizations, as you were mentioning, think of the big guys: [you] have tremendous legacy baggage. And that's one of the biggest obstacles to modernization and digitization, and to really operating the business. It's great to hear that you want to do the big upgrades and overhaul your IT and business application infrastructure, but what did you run into? This is what people really want to know: what obstacles? As you've been going down this path, you have actually been hitting all the different pain points of change. Do you have a lot of technical debt? Do your systems all talk to each other, and do your budgets all go into innovation? Short of the worst stories, what can you share with us about why this is hard, and what are you encountering?

Tim Baum: So I'm smiling because of "technical debt." If we were on the other side of the technical debt, we'd love to be a bank, because we'd be raking in the money. I mean, we have so much technical debt here because things were developed so long ago. It's tough to want to upgrade a system, or replatform a system, if it's working. What's the ROI on it? What's the CBA on that? It's tough. So what happens is you have systems that keep running, and you almost have to get into risk management. It's ironic, we're an insurance company, but how do you manage the risk of your systems that are out there? So we do have a tremendous amount of technology debt that we have to do something about.

Part of our whole core system replacement was trying to tackle some of that technology debt. When we undertook it, because we are a smaller shop, we had so much debt that the effort to do the core replacement was really overwhelming. So, we actually had to go from a greenfield approach to a brownfield approach for how we were going to deliver the core systems. And so, we do have a stack of technology debt that we are going to have to deal with.

I think, ironically, one of the benefits is that technology has changed and given us new ways to develop. So, like coding camps: do we have any means by which we could take one of those systems that is technology debt and throw it out there, or throw it to an incubator and say, "Here's the problem. How would you solution it?" Have a bunch of individuals go off with a bunch of Coca-Cola and a bunch of pizza for two weeks and crank something together, using newer technologies, using plug-and-play pieces. Again, from a corporate standpoint, we really don't think that way. Now, at Harford Mutual, we've adopted the Agile Scrum methodology.

So you know, that's even a little step ahead, because traditionally it's always been Waterfall. Agile has probably only really been used in the last fifteen years or so; that's when it really started to get some legs and organizations started implementing it. But even from an Agile perspective, do you have to rethink how Agile works? And so ...

Dion Hinchcliffe: And the whole DevOps [...] is now really emerging as the post-Agile view of IT, where we expand collaboration far outside the boundaries of development, into operations and a bunch of other areas, so that we can continuously change, right? It's a very interesting discussion.

Tim Baum: It is, because ... So, with that, [let's] kind of jump back to the digital aspect of it. You know, there is Kaizen, continual improvement. So even though the technology is going to continually improve, I think the processes are going to continually improve, and the industry is going to continually improve. So you take all of those, and they're all separate streams running along at different rates and different paths and different flows. How can you cross those over to actually optimize what you're really trying to get done? I think it's a really exciting time. I mean, I think there's just a lot of opportunity. Insurance has been around ... I think most people would say Benjamin Franklin really had the first insurance company, in 1752 in Philadelphia, when he did fire insurance.

You know, things have changed since 1752; I mean, things changed from 1752 to 1842 when we started. How are those things going to progress? With technology and with human capital, how do we do that? I think it's also about learning and having a better understanding of how people operate: what drives people, what drives people's interactions, what drives people's purchase patterns? How do we incorporate that into the insurance industry?

Ultimately, we're insuring risk. We're trying to take someone's risk and cover it for them. But how do we do that? Getting back to what insurance is: does insurance change from, let's say, "I'm selling you a policy" to more of, "I'm offering you a service, and one of the components of that service enables you to manage your risk," so that the focus really isn't on ...

Dion Hinchcliffe: And all the, you know, digital self-help offerings are now a whole category; really just becoming a digital risk management coach, right? For a charge, you know.

Tim Baum: Yup.

Dion Hinchcliffe: So that's interesting. So hold that thought: I want to come back to some of the things you just said, that we just covered. Maybe it's not your territory, Tim, but it's about new ways to change. You touched on some important pieces across a lot of topics in a moment.

So we're about halfway through Episode #211 of CXOTalk. We have a very special guest, the CIO of Harford Mutual Insurance, Tim Baum. We'd love to take your questions about digital transformation, about insurance, and about insurance technology on Twitter. If you've got questions about the challenges Tim mentioned, please send them in.

So Tim, you were talking about hackathons, or, you know, developer festivals. How can you go outside and get access to innovation in a contemporary way? You've always had outsourcing, staff augmentation, things like that, but those are kind of old-school compared to being able to join a hackathon and get young kids who have a lot of great ideas and spare time to try something new. And you really see that. You know, Michael Krigsman, the founder of this show, and David Bray, the CIO of the Federal Communications Commission, who is also now a co-host of CXOTalk, and I were talking about change agents and change agency. How do we tap into these ecosystems and the network of people that you have? You know, you have many agents with great ideas, and in fact customers, development partners, and business partners with great ideas. How can you unleash them? It sounds like you're going down that pathway, and you see this in other CIOs. How do we provide more sustainable change when it comes to technology? It's changing faster and faster all the time, and we haven't changed our model with it.

Tim Baum: Right. So, you know, communication might be the easy answer. But communication, and it's two-way communication, it's that follow-through. So again, having this one individual out there on the marketing side, actually talking to those individuals, using technology: if we have boards, if we have ways .... You know, you go to Amazon and you look at reviews; you could have the best product out there and it's still going to get some one-stars. How do you separate the chaff from the true value that's in that review? So, how do we do that with the agent, with the ultimate consumer? How do we get rid of the chaff and really understand what that feedback is? I think that's where it actually takes a human being able to go in and analyze: what value are they looking for? What is the problem they're trying to solve?

So within IT, you know, you always run into problems where the business people will come and try to give you the solution. They don't give you a requirement, they give you a solution. So, go back and tell me what the requirement is. In that same sense, go back to the consumer, to the agent: what problem are you trying to solve? What are those needs that are out there? Then once we bring it internally, you have the other half of it: how do we solution that? How do we solution it from an architectural standpoint, from a software standpoint, to an actual development standpoint? And I think that's where, again, if we look at things more broadly, and we don't look at things in a serial manner, we will have more benefit.

You've probably heard the concept of "fail fast." So, you know, go and try. If it doesn't work, that's great. Don't spend a year working on something and then say, you know, this really isn't ... We're trying to put a square peg in a round hole. It's not going to work. You know what? Break it out in a month, spend some money, and if it doesn’t work, that was a lesson learned.

Never fail. Either learn or succeed. So, you know, just because it wasn't successful doesn't mean it was a failure, because you've learned something from it that you should be able to apply in the future. And as long as you can take that and apply it in the future, you're going to become better. So, whether it's through wisdom that you garnered from it, or a speed improvement that you were able to see and optimize on, there is benefit, even in the face of what you might deem a failure. But it's not.

Dion Hinchcliffe: Yup. The only failure is the one you don't learn from, right?

Tim Baum: Right.

Dion Hinchcliffe: Well said. Although, now we have to have so many lessons learned. I think that's our challenge, and that takes us to our next point, which is when we look at insurance companies: I've worked for a lot of insurance companies over the years and I'm pretty familiar with their culture, and it's not one that's really, you know, digitally-centric or fast-moving, you know? Insurance companies tend to be very deliberate; they're full of actuaries who try to make amazingly precise evaluations, or you don't make any money, right? And they're not exactly an exciting atmosphere. So how do you build a digitally-savvy culture that's going to attract the right talent and compete with these insurance startups? I'm looking at some of the maps of them. They're going to be breathing down your neck more, and more, and more, and they're going to be sucking in the best talent. You know, what's the CIO's role in making Harford Mutual Insurance an exciting place for top developers and top technical talent?

Tim Baum: So I think for us, it actually is somewhat easy, because we are a growing carrier. Our senior management staff is the best there has ever been. There is a commitment to the employee, there is a commitment to the community. The amount of investment that we're making within IT is phenomenal for the size of our organization. So, nothing is stopping us from a technology standpoint, so the excitement of being able to get out there and say, "How can we do things?" You know, don't say "Why?", say "Why not?", whenever you ask something.

Marcus Ryu, who's the president of Guidewire, he's got this quote that I actually told him I was going to put it up on our wall; I was talking to him a couple months ago. And it was, "Act with urgency, but not recklessness." And so, if you can instill that with people that, you know, you've got to act with urgency because technology is changing. There are things out there, there are people that are pushing, there are companies that are pushing so act with urgency, but don't do it recklessly. And so, I think if you can foster that within your organization, you'll have people that want to work at your organization.

I think also, it's the relationship between IT and the business. If IT truly believes that they are here to enable the business, and it is a partnership, don't you want to work with a partner? Aren't you going to be excited? So if we can have the technology to back ... So you mentioned about an actuary. I've met tons of actuaries and you're right, they are ... You know, in technology, you'd say you've got your geeks that want to program; but an actuary is probably the brother of, or sister of the geek that's on data, and they think numbers, they think patterns. But in order for them to do their job, they need data. And they need good data, so how can they get data? So can I get them the data that they're looking for, so that they can actually do their job? And again, I think I can in my organization. I think we can. I think we can provide that information that they can use to do their analysis; and then, it's building out the products. Again, what makes it exciting for us? What products out there do we want to offer? How do we structure those products?

You know, we want to go into more states from an expansion standpoint. How are we going into more states? What are the lines that we want to be able to offer in those states? How do we take lines of business and make them more productive? Through predictive analytics? So, there is a whole confluence of things going on, specifically at Harford Mutual, that I think makes it an exciting place to work, and an exciting time to work.

Dion Hinchcliffe: Well, you're bringing up some really interesting topics. You're going to push up my next question about public cloud, so think about that for a second. But my question really is: it sounds like you're really getting involved in the business planning and the business strategy of the company, trying to figure out where you can grow and expand from a business perspective, and how IT gets a role in there. And we see that the CIO is becoming more involved in leading the business conversation. Is that what you're seeing as well, or how do you view the CIO's role in guiding the business?

Tim Baum: I do, because what the CIO does is the CIO brings the technology aspect, and they can help solution things when they're talking. They can help widen the minds of the people that are in marketing, the people that are in underwriting, the people that are in claims of how they can do their jobs. So if I know somewhat of how they can do their job, and I can again be a partner with them, and I can raise questions, to be candid, I may come up with some really dumb questions of why do we do that? Why does the industry do that? Because I've only been in the insurance industry for a little over two years, so, you know, part of it I can say, "I don't know! I don't understand. Why don't we do this? So educate me." But I also think it then may pose the question of, "I don't know why. Why don't we do that?"

So you know, do you know the old story about the mom? She's cooking Thanksgiving dinner, and she cuts the ham in half and puts one half in the top oven and one in the bottom oven, and her daughter says, "Why do you do that, Mom?", and she says, "I don't know. That's how Grandma does it. Go ask Grandma why she does that." She goes to Grandma, and Grandma says, "I don't know, that's how great-Grandma did it." She goes to great-Grandma, and great-Grandma says, "My oven wasn't big enough, so I had to cut it in half."

Dion Hinchcliffe: Yup.

Tim Baum: You know. So, are we doing things like that, that we shouldn't be doing, that we can say, "Why are we doing business today? What is our process?" I think that is one of the great things, again, that keeps Hartford Mutual great, but it is. But, what is ... We're going through this core system replacement. What are some of the processes that we've been doing for thirty years, when technology first entered the realm within Harford Mutual? And why are we doing it that way? Do we need it? Do we need to print out paper? Does someone need to see a printed report in order to do their job? Or can actually have the system reconciled with stuff, and just say, "You know what, it reconciled. If you want to look at it, you can go to this file and look at it, but I'm telling you, I already put everything in the Excel file and did a match, and they all match, so you're good to go!" And how can I then make that accounting group more efficient, by taking some of that menial work out there, going and checking numbers by numbers down reports, so they can actually be doing better things?

How do we do our billing better? How do we process our payments? We want to pay what we owe people when they make a claim. How can we do that faster? How can we do that more efficiently? We don't want to pay out more than we owe, but how do you become more efficient? And I think that's part of a continual improvement. So, it is ...

Dion Hinchcliffe: It sounds to me like you already have a lot of the Chief Digital Officer role there. That really helps from the perspective of the seat I think technology leaders need to have at the table, and credit to you guys [...] for having that. So let's talk about how you are accomplishing this system replacement so quickly, given how much legacy you have. There's been a lot of discussion about how regulated companies can stay compliant while using public cloud, which would let you move much faster: no longer dealing with the plumbing, with capacity you don't use except for peak periods, with all the upgrades, and with maintaining a big foundation where you run your software in your own datacenter. Has going to public cloud been part of your strategy, or are you doing something different in order to accelerate your digital transformation?

Tim Baum: So, it's funny. I was talking with an individual from another carrier probably about a month ago about this. And I might be showing my age, because I still like to ... So we don't host our datacenter elsewhere; we currently have our datacenter actually in our office. And while, to me ... maybe I'm young enough to ask why I would want a datacenter in my office when there are hosting centers out there that will have the security, the fire retardant systems, you know, all those backup generators, so I don't have to worry about that. But to go to the cloud, I can't put my hands around it, I can't touch it, and so I want to feel like I still have a box there that I know is my box, that I can actually physically go out and touch. Now we'll all come around in five years to where I don't really need that box, because all our boxes are virtual; so even if I can touch the box, I can't touch the server, because it's a virtual server.

Dion Hinchcliffe: Right.

Tim Baum: It's the concept of knowing that there's actually a physical piece of hardware out there that I own, and that I'm not going to Platform as a Service, you know, something within the cloud.

I think the other thing with the cloud is that because we are regulated, it opens a whole other dimension from a security standpoint. When I start putting stuff on ...

Dion Hinchcliffe: It's a whole other bar, and ...

Tim Baum: Right.

Dion Hinchcliffe: ... I'm hearing it from CIOs in the UK; in banking they're like, "No way cloud is happening soon. Of course, eventually, but not for us. We need to control it," right?

Tim Baum: Right.

Dion Hinchcliffe: So, it's interesting because I think, you know, by the end of this year, more than half of all enterprise workloads will be on public cloud, so that just shows you how fast it's happening [...].

Tim Baum: Right.

Dion Hinchcliffe: So, it's very interesting. Well, we're almost through our very interesting session here, Tim, and thanks for all the insights; they've been very useful. I'd like to tap into the store of wisdom that you've built up as an IT leader. You've been there four months, but it sounds like you've got things really underway in terms of large changes to the technology infrastructure. What practical advice would you give to CIOs, and those who want to be CIOs, for undertaking a program of digital transformation in the insurance industry? Given what you've learned already, what might you do a little differently? What advice? What hurdles did you get over that you wish someone had told you about?

Tim Baum: So I think one is to realize that you're in a regulated environment, so there are things that maybe you want to do that you can't do because of the regulations, because of the security aspect. I think the other thing you've got to really be sensitive to is security. So Dion, we briefly talked about the Internet of Things. From a security standpoint, is that going to be an entry point for individuals to come in? While you have that feature within your hot water heater, can someone tunnel in through it to get into my systems in order to break in? So there's a security aspect to it.

On the digital side, realize that today is not yesterday, and today won't be tomorrow. Things are going to continually change. So how do you keep up with the business drivers? How do you keep the business informed of what it's going to take to do certain things, so that they can make better decisions about where they want to take the business from a profitability standpoint, from a growth perspective? It's understanding that relationship: I have to have the data as soon as possible, from a systems perspective, on what it's going to take to formulate and develop these systems, so that the business can make decisions about where they want to go. One key is really being able to figure out how to garner that information faster, and how reliably you can then plan where you want to take the business. I think that's important.

I think then it's also really slicing up, from the digital aspect, what you are going to focus on. You can't focus on everything. You might feel like you have to focus on everything, but you're going to have to slice things up, compartmentalize things, attack it, prioritize those things. Everything's important, but you're only going to have one top priority. It's a top priority because it's your first priority. It's not your second priority, it's your first priority. Now, if you can go parallel, that's great, but ultimately, there is only one first priority, and that's the first one.

So it's understanding that there might be a need and a demand to do stuff, but trying to prioritize what that need and demand is, and then looking at the technology options out there. How do you solution something? Also, I think from a digital standpoint, realize that the technology is going to change: in three years, what we see now, I believe, will be magnitudes larger and different than what we have now. So don't build something for ten years out, because in ten years, it's a totally different game. So, how do you incrementally move forward, so that you can then alter, turn, and drive in a different direction as needed? That whole nimble concept. I think keeping those things in mind is going to be the most important thing: you know, what's the technology, what are the human assets that you have so that you can actually do things, and then, how do you actually execute, piecing things together and figuring out what your priority is?

Dion Hinchcliffe: Yup. And I think, and this is what we see in leadership survey after leadership survey, that too many top priorities are what seem to choke things, so I think your advice certainly resonates with those industry perspectives. And, I think not [...] the oceans, you know. How many strategic initiatives have failed by trying to do everything at once? So I think that fast, incremental approach is something that I'm certainly seeing the industry doing more broadly as the smart thing to do.

And so, great! Well, this brings us to the end of an absolutely terrific conversation. Tim, I can't thank you enough for coming on. This is my first CXOTalk of 2017, and we're hoping to explore the insurance industry a couple more times before we're done. So, please, everyone give a thank you to Tim Baum, CIO of Harford Mutual Insurance. Don't forget to stop by CXOTalk.com. We have over 200 other episodes with IT leaders such as Tim. And thank you very much for appearing, and thank you for watching the show.

Tim Baum: My pleasure, Dion. Good talking to you.

Artificial Intelligence and Public Policy

Dr. David A. Bray, Chief Information Officer, Federal Communications Commission
Dr. David Bray
CIO
Federal Communications Commission
Dr. Timothy Persons, Chief Scientist, GAO
Dr. Timothy Persons
Chief Scientist
U.S. Government Accountability Office
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Will A.I. make our government smarter and more responsive, or is that the last step towards the end of privacy? As chief scientist of the U.S. Government Accountability Office, Tim Persons conceives its vision for advanced data analytics. Learn about the promise and challenges around government A.I. and what those portend for private sector companies.

Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC's Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to "think differently" on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He completed a PhD from Emory University's business school and two post-docs at MIT and Harvard. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and the Roger W. Jones Award for Executive Leadership. In 2016, Business Insider named him one of the top "24 Americans Who Are Changing the World".

Dr. Timothy M. Persons is a member of the Senior Executive Service of the U.S. federal government and was appointed the Chief Scientist of the United States Government Accountability Office (GAO) in 2008. In addition to establishing the vision for advanced data analytic activities at GAO, he also directs GAO's Center for Science, Technology, and Engineering (CSTE), a group of highly specialized scientists, engineers, and operations research staff. In these roles he directs science and technology (S&T) studies and is an expert advisor and chief consultant to the GAO, Congress, and other federal agencies and government programs on cutting-edge S&T, key highly-specialized complex systems, engineering policies and best practices, and original research studies in the fields of engineering, computer science, and the physical and biological sciences, to ensure strategic and effective use of S&T in the federal sector.

Download Podcast

Artificial Intelligence and Public Policy

Michael Krigsman: Welcome to Episode #216 of CxOTalk. I'm Michael Krigsman, I'm an industry analyst and the host of CxOTalk, where we bring truly amazing people together to talk about issues like the one we're talking about today, which is the role of AI and its impact on public policy; or maybe I should say, the impact of public policy on AI. We actually have two guests today: Tim Persons, who is the Chief Scientist of the Government Accountability Office of the United States Government, and David Bray, who has been on CxOTalk many times, the Chief Information Officer of the Federal Communications Commission.

And David, let’s start with you. Maybe, just introduce yourself briefly.

David Bray: Sure! Thanks for having me again, Michael. So, as you mentioned, I’m the CIO at the FCC, which means I try to tackle the thorny IT issues we have, internally as well as with our stakeholders, and work across the 18 different bureaus and offices, and right now, the three commissioners that we have that are from both parties.

Michael Krigsman: And, Tim Persons, you’re the Chief Scientist of the GAO. Tell us what the GAO is and what it does, and what you do there?

Tim Persons: That's right, Michael. Thank you. Thanks for having me on; it's great to be at this venue, and welcome everyone. I'm Tim Persons, the Chief Scientist of the GAO, and I'm here to essentially support Congress on the various STEM-like issues that face it. GAO is one of the few congressional agencies. We actually changed our name in 2004 from the "General Accounting Office" to the "Government Accountability Office," and that was a subtle change, but an important one, to reflect the broad remit we have and the focus on accountability, which includes the financial accounting that has been our bread and butter. But we now do a lot of performance auditing and analyses on things like return on investment, pro-bono evaluation, and things like that, for both the Senate and the House; we work for 100% of the House and Senate committees, and anywhere between 75 and 85% of the subcommittees. So, a broad remit indeed, and I do soup-to-nuts science in that domain, including data science and other issues.

The importance of GAO is that we're the oversight, insight, and foresight analytic arm of the U.S. Congress. And so, in that regard, we do that ongoing, day-to-day oversight. If any of you are familiar with or like to watch C-SPAN and various venues, there might be hearings of a panel on this or that, and you normally see [...] witnesses from the federal agencies. And our job is to help support that oversight, but also, more importantly, how to do things, how to achieve better government. That is the insight piece that we work on, as well as the foresight, which is about things to come and their implications. And so, in that regard, I even lead a small group of scientists and engineers...

Michael Krigsman: Fantastic.

Tim Persons: … who do a lot of that sort of thing to support these broad studies that Congress needs to hear about.

Michael Krigsman: So, you know, I think many people may not have heard of the GAO, the Government Accountability Office. There was a period when I was studying and writing very extensively about IT failures, and the quality of the research and the oversight put out by the GAO was simply excellent. So, it's worth looking at the GAO website, because it's an important part of the government in its oversight capability and mandate.

Tim, why is the GAO interested in AI, and the implications for public policy?

Tim Persons: Right. Great question, Michael. And as you mentioned, all of our studies are on gao.gov. So, AI is an emerging, and emergent, technology. It has very disruptive implications, and most of you know that's a business term; the idea of "disruptive" is that it changes the way we think and do things. And the U.S. government, for all of its challenges in certain areas, is also a leading purveyor and sponsor of innovation. You could think of the great advances that NASA brought about, for example, or things out of the armed services, and many other things the U.S. does to help sponsor and promote innovation; and AI has been one of them ever since the concept came up at a workshop at Dartmouth in 1956.

So, the technology has been around, and the U.S. government has been a primary investor in it, even though we now see a lot of private industry money going into it to solve problems. It's because of the profound implications brought about by AI, and the need to help the Congress work in a proactive manner rather than a reactive manner. I typically like to say that most technologies often have a scary initial feel to them, oftentimes driven by science fiction or the fun narrative of things. And AI is no different. Most of the public thinks about AI in a negative context, like Skynet in the Terminator series, or things like that.

But, there is a lot of "The Art of the Possible" and a lot of promise and potential in this as well. And so, I see it as my job to discuss the opportunities and challenges as well as the policy implications, and AI is a perfect time and a perfect place to do that.

Michael Krigsman: And David, you’re also keenly interested in the policy aspects of AI, so maybe tell us about that interest.

David Bray: Sure! So, at the FCC, when I arrived in August of 2013, we had 207 different IT systems, all on-premise, consuming more than 85% of our budget. And if you looked at where the world was going, with the Internet of Things, with machine learning, and yes, with AI, that just was not tenable. And so, in less than two years, we moved everything to public cloud and a commercial service provider, which has reduced our spending on systems from 85% of our budget to less than 50%, on a fixed budget. But more importantly, we reduced the time it takes to roll out new services and new prototypes of offerings to the public from 6-7 months to, if you come to us with new requirements now, something in less than 48 hours.

Now, I say that cloud computing is the appetizer for the main course, which is beginning to make sense of all the data that the Internet of Things will be collecting. The only way you can do that is with a combination of machine learning and what some call AI, and that's what we're getting at as well. We've got to have a way of dealing with the tsunami of data that's going to be coming in, and be the trusted broker between the public, as well as in public-private partnerships, so that as a nation and as a world, we can move forward. What experiments can we begin to do that show its benefit in making public service more responsive, more adaptive, and more agile in our rapidly changing world?

Michael Krigsman: Tim, what do you think about this notion of experiments with AI to show what is possible and the benefit that it can bring?

Tim Persons: Yeah, great question! I think it's a without which, nothing. I think if you don't have a sort of experimental ... and I'm an engineer and scientist by training. So if you don't have this experimental, "Let's build safe spaces," as I'll call them; mechanisms to [...] the technology and do these things, as is happening in various areas and elements of AI, then I think you just can't proceed forward. I don't see where you could possibly innovate without the ability to safely fail, learn quickly, iterate, recycle, and move forward.

Michael Krigsman: You know, it’s funny you talk about that, and one thinks about these things, “failing fast,” I don’t like that term, but “fail safely,” “experiment,” “iterate rapidly,” one thinks about that as being in the private sector. Does the government have the ability to be agile in this way?

David Bray: So, I would say that comes from leadership. Whether it's a good Chief Scientist or a good Chief Information Officer, I think our job is to make the case to the secretaries or the heads of our agencies: "Yes, we need to keep the trains running on time for these things." But if we only keep the trains running on time and we don't innovate, you'll end up where I did at the FCC, where everything was on-premise, the IT on average was more than ten years old, and they had fallen behind. The private sector knows this, because if companies don't keep abreast of what's going on and stay agile and nimble, they fall behind and eventually go bankrupt.

I think the same thing is true in the public sector, which is if we don’t, 1) We’ve got to keep the trains running on time, but then 2) Doing experiments to deliver services differently and better, then we will fall behind. And so, the art of a good C-suite officer to their secretary and their head-of-agency, is to make the case as to, “Here are the things that we’re going to deliver, here are the things that we’re going to try and pivot and learn from, and I’ll be the human flight check if you can move that forward.”

And I think that's true for any organization. That's part of the job that Tim does, that's part of my job at the FCC… Other CIOs are out there; you don't often hear from them. But, they are trying to deliver results differently and better to their leadership. And, it's especially key right now because we have a hiring freeze across most of our agencies, and the only way we can deliver results differently and better is if we figure out ways to make people more productive, and that gets to machine learning and AI.

Michael Krigsman: So Tim, any thoughts on this?

Tim Persons: Yeah! You know, I was going to say … I think David said it very well. I think it does take that key leadership. I mean, people don't get elected and appointed in DC by saying, "I'm going to fail on this sort of stuff." Naturally, no one likes to say that, but that is the way innovation comes about: "Let's try this. Okay, that doesn't work out. Let's try that." You always try to make best efforts; the intent is not to make a colossal mess of things. But I think there's a reason our innovative, high-risk agencies have shown success over decades of various [...] technologies. I grew up, for example, in the era of the space shuttle, and that used to be very cool and innovative. But, that took a lot of testing by the NASA enterprise around the country in all the various centers. It wasn't like you threw a bunch of things on the launchpad and then hit the "launch" button with people inside. We've obviously had painful national tragedies with that as well, even with best efforts, but that is where the incredible amount of innovation and advances that we can [...] … And that's just picking on the space program; I'm not even going into the weapons programs or the other civilian-side things, like, for example, what David's doing.

Michael Krigsman: We have a question from Twitter, and Arsalan Khan is kind of getting to the heart of the matter, and he wants to know, "What can we use AI for? For example, can you use it to assess government contractor proposals?" So, where are we, regarding a practical use of AI?

David Bray: Great question! One that I've been trying to beat the drum on. There actually is already, right now (not in government), a public competition to see if anyone can write a machine learning algorithm that will evaluate real estate law as well as a real estate lawyer. And so, that's about 75% accurate at the moment. As we know, California already is using machine learning to set bail decisions, and that's interesting because we can identify biases in historical bail decisions, but can also weed out things that should not matter to your bail hearing, like your height, your weight, your gender, your race.

There already is success, for example, in using machine learning to grade papers at the third-grade level, finding the same sentence and grammar mistakes… And so, yes, I think we can have faster acquisition, because now it's sort of complementing the human that is reading through these very long contracts and making sure they are actually legally approved and can be used.

I'd also love to see AI used to try to identify the most effective employees in the workplace, as well as those that are maybe being underutilized and could be used better. I'll defer now to Tim, because part of what makes GAO so wonderful is that they do both accountability and experiments.

Tim Persons: Right. Thanks, David. Just to piggyback on that, we just issued - and I'm showing it a little bit for the camera - a report that's fully downloadable on our website, gao.gov; you can just use Google or your favorite search engine, GAO 16-659SP. It's a strategic study we did on data and analytics, and on what innovation is coming out of this. David and I think in these terms when categorizing the advances of data and analytics as it moves towards AI, and really the overall datafication of the US federal government. And, starting now, there actually is a law. It's called the Digital Accountability and Transparency Act, or the DATA Act - for those of you who aren't in the know, DC likes to come up with clever acronyms that embody the issue at hand, and this is one of them. And, the DATA Act is really just saying, "Look. Federal agencies and departments, you are required to publish your spend data in a standardized manner, so that you can now apply data analytics to it."

But, these are the initial steps that are necessary for the algorithms to be able to not only collate the data but then start to do the intelligent work on it that David was referring to. I mean, right now we're at spend, but exactly what he was saying about HR data, things like, "How do we more critically identify our problems, and have really a more empowered management approach to various federal agencies?"

So, just in the day-to-day administration of the government, I think big changes are coming. I'm excited about those sorts of things, but obviously, there are a lot of control issues. There are indeed technical issues, and there certainly are policy issues going on.

Michael Krigsman: And, what are the policy issues that come along with all of this?

Tim Persons: So, yeah. Go ahead, Dave. Do you want to take that first?

David Bray: Well, I'm just like, "Where do you even begin?" I think the "P" in policy is really more for "people and workforce." You have to remember, if you go back to 1788, James Madison wrote in Federalist No. 51 that he wanted ambition to counter ambition. The reason why is, "what is the government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary." So there is a system of checks and balances that prevents any one person from having too much influence too quickly across the large public service enterprise.

The challenge is that AI does cut across the enterprise. It is transformative. And so, we have this system of checks and balances that I think is good; it's what actually keeps our nation moving forward as a republic. And at the same time, you have this exponential change being brought on by data, through the Internet of Things, through AI, etc. And so the question is: How do you take an organization that was intentionally designed to have checks and balances, and have it move forward with speed, in a way that does bring people along? I think it's also a question about most of the workforce of public service, and I don't believe this is the case for Tim or myself, or even 20% of the people I know in public service.

The premise was: You come in, you move things forward incrementally, you keep the boat afloat regardless of who's president, and that's your proposition. Now, what we're asking them to do is something that's game-changing, that's much more like the private sector, except we don't have an IPO, and we don't have the same financial incentive that if you do a really good job, you get your initial public offering.

And so, how do we motivate employees in a workforce that was hired for one reason, that is, to keep the nation moving forward, and encourage them not just to keep the nation moving forward, but now to think completely out of the box and be transformative?

Tim Persons: Yeah. And I would just add that on the policy side of things, part of the era of big data and data analytics is challenged by how powerful it actually is. There are studies at MIT, at Cambridge, at UC Riverside, and so on, all showing that just with sparse information out there on places like Facebook, four or five "likes," you can profile a person with very high fidelity on various things without knowing anything else about them. So actually, it's almost too powerful in one sense, and it does invoke this issue of how you mitigate against the PII risk, you know? The personally identifiable information where you can resolve individual citizens. We are a constitutional republic. That means we care about individual civil liberties and privacy rights, and so on, and that's one of the big issues that is going to have to be dealt with moving forward.

On the cultural side, I think David put his finger on some key things: we have to think totally differently here in the public sector. I would assert it applies to the private sector as well. But, there's just the idea of thinking algorithmically about things that we normally take for granted. AI makes it so we have to think as a computer does, even though we want to train it to do things that, over David's and my decades of life, we picked up as inherent knowledge we never had to program in. But now, there are opportunities to think about this: What are the things that tend toward greater efficiency and success, and yet still do not violate constitutional principles?

Michael Krigsman: So, it seems to me you're raising two issues here. One is the issue of the role of public policy regarding supporting AI, or conversely, it can inhibit the use of AI, the expansion of AI; and number two, is the cultural dimension: How do we learn to think algorithmically? So, how do we change our thinking patterns to take advantage of these new technologies?

David Bray: I think that hits the nail on the head, Michael. I try to use the words "public service" as opposed to "government" nowadays, because the time it takes to send information between Topeka, Kansas and Washington DC is no longer four days on horseback; it's now milliseconds. And so, the way we used to do things had to account for communication being slow or delayed, and coordination being difficult. That's no longer the case, and so maybe there are things the public can be involved in, or do directly, without requiring government professionals. Maybe there are things we can do as public-private partnerships, where parts of the private sector are thinking beyond just their own individual bottom line and are also thinking about local or national impacts.

And so, the last thing we want to do when we move to these technologies, whether it be the cloud, the Internet of Things, or machine learning and AI, is to take the old way of doing things and just replicate it there. We're actually talking about wholesale experiments in how we deliver results differently and better, given these new technologies and what they make possible.

Michael Krigsman: And Tim, your thoughts on that?

Tim Persons: Yeah, I think it absolutely is the case that policy will often evolve after the technology does. And people are rightly concerned: let's not knee-jerk and regulate something and kill the innovation in the cradle, so to speak. So I think there's optimism. There's good news: we have a view right now in Washington that is more open on this particular issue. It's being led by some of the more near-term innovations; I'm thinking in particular about autonomous vehicles and our Department of Transportation, when you talk about our National Highway Traffic Safety Administration, or NHTSA, as we call it in DC.

It is a safety regulatory body, and yet they're being proactive with the Waymos, and the Ubers, and even the various car manufacturers on how we can get this right and how we can test this, as well as how we do this so that we're not just issuing a rule that comes out and effectively kills US competitiveness. And so, you know, I'm not Pollyannish about this, but I do think there's a posture of recognition that we need to allow for some managed risk in this innovative process, without killing ideas, and yet trying to be as safe as possible.

Michael Krigsman: Who is responsible for striking this balance? How does it get done? And again, we're talking about AI, and we want to create an environment that fosters AI. And yet, at the same time, people have concerns and want to have certain types of controls in place; and so, that balance, and correct me if I'm wrong, is essentially the province of government policy.

Tim Persons: Right. In this case, it's one of these things our government has set up to diffuse power and to have various elements take care of their relevant mission spaces, okay? So let me say it a different way: it's going to devolve to the departments and agencies, so it's going to be context-dependent. The Department of Defense is going to care about AI regarding warfare and what's allowable when engaging in warfare, and there's no appetite to just turn a machine loose to do national security things willy-nilly.

On the other hand, you may look over on the healthcare side of things, where Health and Human Services is going to want to regulate. They care about the health information privacy laws, or HIPAA, which govern how your and my personal medical information is kept private. And yet, we still may want to be able to utilize these tools to aggregate data and come up with quicker, better, faster, cheaper diagnostics and treatment options for whatever maladies may come our way. And so, you're going to see evolution in the various departments based upon their particular missions.

Michael Krigsman: David, you look like you’re nodding in furious agreement.

David Bray: I’m in great agreement, and it’s probably best that Tim answers that one, since he’s at the GAO, so…

Michael Krigsman: [Laughter]

Tim Persons: Well, I mean, I think even FCC, right? You want to manage various issues and things like that. I think AI, for FCC, it’s a customized-type thing. There’s not a generalizable AI, where we’re going to say, “Here’s this thing,” and it’s going to apply across the board. These things are going to be highly sophisticated and contextualized in whatever we’re asking them to do.

David Bray: Agreed. I think that's key to our republic. Our republic, as Tim said so eloquently, does aim to diffuse power to the specific missions of the departments and agencies, so they know context best. And so, what I would say, when looking at experiments with machine learning and AI, is context, context, context.

Tim Persons: Right.

Michael Krigsman: Again, I keep coming back to this point: What are the government's role and that of the public, because we're talking about public policy?

And let me also mention that you are watching Episode #216 of CxOTalk, and we're talking about AI, artificial intelligence, and public policy. We're speaking with David Bray, who is the Chief Information Officer for the Federal Communications Commission, and Tim Persons, who is the Chief Scientist for the Government Accountability Office; which, by the way, does amazingly excellent work, analysis, and research, if you're not familiar with it.

And right now, there is a tweet chat happening with the hashtag #cxotalk. So, please join us on Twitter, and you can ask a question as well.

So, getting back to this issue of the role of government and policy, where are we today? What's the status of policy and AI, and where should the policy domain be going, on AI?

Tim Persons: So, let me just talk briefly about the government role, because in some sense, speaking historically, there's the "what has been," the "what is now," and the "what could be moving forward." There's always been general agreement ever since the post-WWII Vannevar Bush memo, you know, science in the interests of society, which was really profound in establishing the National Science Foundation and several elements of our basic research enterprise as we know it today. Bush, when he was writing that, was really just saying, "You're investing in early-stage science" - some might call it letting a thousand flowers bloom - you sprinkle seeds of ideas with relatively little money, although aggregated it could be large money. But you try things out with our universities and our basic labs and things, and we have a great innovation enterprise to do that.

Absolutely no controversy, really. That's bipartisan-supported - the idea of doing that. And it takes a lot of the risk out of just expecting the private sector alone to explore those sorts of things, when there's a high degree of failure in those areas.

Moving forward, though, the key thing oftentimes gets into creating what I would call an "infrastructure for innovation," so that if entities want to try to develop something, they can de-risk it as they look to scale in manufacturing and other particular areas. There is debate about the extent to which the government should do that itself or rely only on the private sector. But there are things like the National Network for Manufacturing Innovation as an example of trying to bridge that gap.

And then, when you look to where it's operational, that's where the government comes in with regulatory rule-making. If it's competing in a marketplace, you want it to be an even marketplace, a level playing field. If it's operating, like I mentioned with NHTSA earlier, you want safe operations, so that autonomous cars aren't running over living things, crashing, or doing bad stuff. Those are key things the government has. But other than that, you want to create the innovative environment for the economy to move forward, create jobs, and allow for growth.

Michael Krigsman: We have a very interesting question from Wayne Anderson, and he directs this to Tim Persons, who is the Chief Scientist of the Government Accountability Office. It's a hard question. He asks, "In a world where AI innovation may not succeed, how do you define 'investment efficiency'?"

Tim Persons: Yeah. Great question, Wayne. The answer is … Oftentimes what's happened historically is that when you invest in this - and AI has had, I mentioned this in a [...] workshop, decades and likely billions of dollars put into basic research across the various elements, whether it's medical basic research at NIH, or DARPA at Defense, or NSF, and so on - it is a good question: how much, or how long, do we put money into that, and when do we declare defeat and maybe do something else?

The short answer is that there is no macro, overarching center of authority that determines that. The closest thing in the executive branch is the Office of Science and Technology Policy, whose previous director, Dr. John Holdren, was appointed by former President Obama. That office is there to coordinate and facilitate, but oftentimes not to dictate to, for example, the Department of Energy what they may or may not do in their research portfolio, or to the NSF, and so on. He was very influential. However, that's not the same thing as top-down direction; it's usually more diffuse and left to the different agencies.

So, stopping and starting is, again, another one of those contextualized things. There is no central authority on all of these issues. I think the good news for AI is that I've mentioned the decades and billions. I think we have and will continue to see, innovation and fruit come out of that, and I think that's cause for cautious optimism in terms of the various things moving forward.

So, I think the key question is when the government should stop funding something, assuming private industry has already picked it up. And that's indeed a greatly debated question in the relevant committees on the Hill.

David Bray: And I'd like to add to that. There's a historical analog; if folks are not familiar, they should look it up. There was Project Corona, which was a satellite effort in the late 1950s, before we ever had a rocket go to the moon. Basically, ARPA at the time, as well as the Department of Defense and the intelligence community, was trying to launch a satellite that would be able to take photos of Earth. That effort had thirteen rocket explosions before they ever even got something up there. And you can imagine nowadays, would we be willing to tolerate thirteen rocket explosions before we finally got it? Because, obviously, it paid off; could we imagine living without Google Maps now? And in fact, the imagery the early predecessor to Google Maps was using actually came from declassified Corona images. And so, this is one of those things where …

You know, how does Elon Musk decide where he's going to focus? He's probably going with a combination of analytics, but ultimately his intuition and his gut. I think the same thing is true with public service, except it is many different people's intuitions and guts, as opposed to one person, and thus a distributed nature. But like Tim said, AI has been through probably about three waves, and we'll probably see another wave after this, and each time, there are going to be things that it will have, that maybe are equivalent to thirteen rocket explosions before they finally pay off.

Michael Krigsman: So we've identified at least two dimensions of policy, it seems to me, during this conversation. Number one is the economic investment policy, given the fact that it may not succeed, but it does hold a great deal of promise. We're talking about AI, but this could be true of any advanced technology such as flights to the moon, as David was just alluding to. And then the second is the role of government policy in terms of regulating AI, or creating a legal and regulatory environment that either supports the development of AI and its proliferation or inhibits it. Is that a correct statement of the two dimensions of policy that we've spoken about?

David Bray: So, this is where I will switch my hats and put on my Eisenhower Fellow to Australia and Taiwan [hat], and I’d say “yes” on that second part talking about what one might be able to do with rule-making. Both Taiwan and Australia are recognizing that with new technologies like the Internet of Things and AI, traditional notions of rulemaking may not be able to keep up with the speed.

And so, personally, I don’t have any answers, and I’d be interested in Tim’s thoughts. We may need to do experiments, in fact, on how do you even keep up with the speed of these technology changes, because the old way that was done may not be sufficient.

Tim Persons: Sure, I agree with that. I think there’s going to need to be just innovation in the rulemaking process. A lot of times, it’s deemed to be quite slow in things now, but it’s just because of the federal laws that have been layered over decades of policymaking that make it so, right? There are ways, I think, to garner public inputs, perhaps in this day and age, far more efficiently and effectively than traditional ways that we’ve done. But, the relevant agencies have to get there.

I also like to say, there are certainly, again, clear and legitimate concerns about regulation stifling innovation. But there's a case that's often not thought of: sometimes, well-thought-out regulation can help spur innovation, in terms of, "Look. We know you ought not to do this, so let's design the system in this particular way so that it works." And I think some of the more creative activities I've seen come from that positive angle as well, not just the "cut all regulation out!" Because, at the end of the day, I don't think anybody wants zero regulation and a completely and utterly Wild West. At least, I don't want to ride in an autonomous vehicle, for example, in that context. But, I think there's a way to find that baseline way of doing things and then support efficient solutions, and we're going to learn. You cannot eliminate risk all the way up-front in any enterprise. Period.

Michael Krigsman: We have another interesting question from Chris Petersen, who’s asking, “What are the mechanisms or pathways to gaining collaboration across agencies?” And given the fact that you’ve just been describing the context that each agency has its own needs, it seems to me that that will have a tendency to lead towards siloing and duplicative efforts. And so, what are the pathways for collaboration on innovation?

David Bray: So, I would say that, in some respects, you hit the nail on the head: the Founders originally wanted siloing, and the point of it was to prevent any one person from having too much influence. But I think that is the challenge we face. These issues with the Internet of Things, machine learning, and AI need to cut across, and in fact do cut across, domains. And so, interestingly enough, I'm going to put this forward, and I'm interested in Tim's thoughts: I think it's easier for agencies to partner with the private sector than it is for them to partner with each other, partly because of what we call the "color of money," the funding money. You get into some very tricky rules and legislation if I use my money in partnership with another agency's money. This is actually when GAO sometimes gets called in to account for the funds. And so, I would put forward the more interesting model we need to think about.

This is: do we need to look at innovative public-private partnerships that maybe have different agencies contributing to them, but where the center of gravity is the private sector and the different agencies being brought together and convened there, as opposed to trying to do something that's just inter-agency in nature? So, I'd be interested in Tim's thoughts.

Tim Persons: Yeah, no, I totally agree, David. The last president spoke heavily about public-private partnerships. That phrase means a lot of things to a lot of people, so it itself needs to be thought out; but there is the art of the possible. Those things have gone on, and I agree: I think sometimes it's easier to connect externally with other entities, even private entities, and build those collaborative networks than among the federal sector. But not all hope is lost. There are times when there are formal coordinating bodies set up, either by statute or by policy from the White House. There also are informal arrangements among the federal entities, and they can be very effective.

I will speak personally. I participate in the Chief Data Officer community. In the last administration, we had a federal-wide Chief Data Officer, and he was excellent and really did a lot to evangelize the idea of data and analytics and what it means - very powerful indeed. Unfortunately, just the way we stovepipe things at our agencies - the way the budget's run, the behaviors incentivized - we often are limited in our ability to do that collaborative piece. And there is always an element of, "I need to do my day job, but I also need to coordinate and collaborate. How do I recognize when to do that and build the partnerships to get things done, especially with today's 21st-century, complex, adaptive-systems challenges?"

Michael Krigsman: Very quickly, because we have about just over five minutes left, and there’s another topic that I want to talk about as well. But, very quickly, would either one of you like to offer your prescriptive advice to policymakers regarding how they, and we should be thinking about the role of public policy and AI? Would either one of you like to take that one?

David Bray: So…

Tim Persons: [...]

Michael Krigsman: Okay.

Tim Persons: So, I'll do this. I'm not going to offer a prescriptive [solution]. I don't think we're prepared to do that. I mean, we personally … I just mentioned our data analytics study. We're kicking off an AI study just because of the importance of this. GAO will officially come out with some concluding observations on this sort of thing in time. So, I'm looking forward to that.

What I will say, though, is that I think the government does have a key role in partnership, as David was elegantly talking about - just the idea of the partnerships that we can build, how we solve things in a collaborative, networked manner, how we focus on problem-solving and not just on what we can and can't do. And, I think we can create an environment where this overall system, together, can be arranged to maximize our success in innovation and AI, and minimize the undesirable outcomes.

David Bray: And I will add to that and say, again, I can't do prescriptive; that's actually not what my role is. But I can say, if you look at the successes we've had with the Defense Advanced Research Projects Agency, it might be worth asking: do we need a civilian equivalent that brings together these different agencies, but also works with the private sector? Because if we wait for the trickle-down effect of innovations on the defense side with AI and machine learning to be brought into the civilian sector, we're going to be too slow. And so, we may need a civilian equivalent of DARPA. In fact, interestingly enough, there are some agencies that bring in more revenue than they spend, and they could actually be a source of funding - with no additional tax increases - to run a civilian enterprise for advanced research projects in AI.

Michael Krigsman: Okay. So clearly, one of the messages here is: is there a need for a civilian equivalent of DARPA, and what is the role of public-private partnerships in getting things done? But, before we go - we have five minutes left - I would like to shift gears and talk about the role of the Chief Information Officer in this age of very fluid and changing technology, and very fluid and changing expectations of the CIO. And I know that historically, CIOs in general - and of course there are many exceptions to this, such as David Bray - have gotten a reputation for being the keepers of the word "no." The default is, you want something done? "No, we can't do that." Can we do this? "No, we can't do that, either. We can't do anything!" And maybe, that's unfair.

So, thoughts, anybody, on the changing role of the CIO today?

David Bray: So I’ll give my real quick [opinion], and then I’ll defer to Tim. I’ll say …

Tim Persons: [Laughter]

David Bray: In the past, CIOs … There are two types of CIOs, I think, nowadays. There are CIOs that still see their jobs as being Chief Infrastructure Officers and just Chief Infrastructure Officers. And, those are the ones that are more likely to say, "no," if it doesn't fit into their infrastructure. But, I think if CIOs are really doing what they need to do to help the organization stay abreast of the tsunami of the Internet of Everything, the large increase in data, machine learning, and AI, they really need to be thinking about a holistic strategy that is defaulting to "yes," and then using a choice architecture strategy to say, "How do we get there in a way that is innovative, manages the risk, and moves the organization forward?"

And so, you can already see this in the explosion of Chief Digital Officers and Chief Data Officers; that's happening because CIOs are not providing enough strategy in that area. And so, we really need CIOs to stand up and recognize that, first and foremost, they should be partnering with their CEO or the head of the organization on how to move the organization forward and keep it relevant for the next five or ten years ahead.

Michael Krigsman: Tim, please. Go ahead.

Tim Persons: Yeah, no, I totally agree with that. I think we have to view this in terms of … Look, these folks have been essentially the Chief Infrastructure Officers for IT, and they have had to have a sort of fortress mentality of protecting the data, given the rise of the hack and all that's going on and is going to continue to go on. I do think that, from the CDO perspective, it's now turning. Where the CIO may see data as a burden - something I've got to protect; the more I have, the more I've got to protect; the more it costs me, even in the cloud, because I've got to buy more commodity storage or ship it around - the CDO is being brought in to say, "Look, let's look at that as an asset. As we datify, how do we find optimization? How do we attack decisions that heretofore were relegated to the gut, so to speak, and be data- and evidence-based in what we're doing?"

You know, it is a challenging job. I don't want to be disrespectful at all, [but] there just has to be some balance brought into that, so they're not falling into the "CI-No" trap, as I think you'll hear it called around departments and agencies.

David Bray: If I can get in real quick, for thirty seconds: I think it also depends on where the CIO reports. If the CIO is reporting to the Chief Financial Officer, then you're going to get the "no," because they're thinking about it as cost. If they report to the Chief Operating Officer, you're going to get "no," because they're thinking about risks to the enterprise. If they report to the CEO, or the CEO equivalent of their organization, then they're going to be more risk-taking and more innovative, because the CEO, at the end of the day, doesn't want the company to ossify or fall behind. So it really comes down to whom the CIO reports to.

Michael Krigsman: But, I guess my question here, to either one of you, is that many CIOs, maybe most CIOs, have the awareness, at least in theory, that they need to be providing a strategic benefit to the organization. And yet, there's a very big disconnect between that awareness and the execution in practice. So, what advice do you have? How can we overcome that gap to help CIOs not just think about a partnership with the business, but actually do it in a meaningful way?

Tim Persons: I think David brought up a good point about reporting to the agency head or the CEO equivalent, and so on. I think you have to come from a problem-solving approach. It's "how might we do something," rather than, you know, "may we" - a permission-based thing. That always matters; policy and rules are there, and laws are there for a reason. But we have to say, "Look, here's the problem." Oftentimes, just defining the problem well is a very important but often neglected Step One, and then comes coming up with collaborative solutions, which may, of course, involve reaching out to the private sector and allowing CIOs to feel empowered to do that. Otherwise, the risk is that it becomes too much box-checking - "Okay, I did this; I did this" - which may unintentionally limit you while making it feel like you're where you need to be.

David Bray: And I would just add to that again: I think you'll probably find that those who know it but don't deliver on it don't have as strong a connection to their CEO. It's when you're close to the CEO, and the CEO is imparting the things they want to try - and, as Tim said, asking whether we can be creative problem solvers in the face of a rapidly-changing world - that's when you'll see those CIOs actually be willing to be a "CI-Yes," as opposed to …

Michael Krigsman: Okay. And, on that, this very fast and interesting 45 minutes has drawn to a close.

You have been watching Episode #216 of CxOTalk. We've been speaking about AI and public policy, and then a little interlude on the role of the CIO; an interlude at the end, so maybe it's not quite an interlude - but sort of an "ending-lude." And, we've been speaking with David Bray, who is the CIO for the Federal Communications Commission, and Tim Persons, who is the Chief Scientist for the Government Accountability Office.

Thanks, everybody for watching, and thank you to our guests. We'll see you again next week. Next week, we have a show on Monday and a show on Friday. So, we have two great shows. Bye-bye!

The Global Impact of Artificial Intelligence

Dr. David A. Bray, Chief Information Officer, Federal Communications Commission
Dr. David Bray
CIO
Federal Communications Commission
Darrell West, Center for Technology Innovation, The Brookings Institution
Darrell West
Center for Technology Innovation
The Brookings Institution
Stephanie Wander, Senior Manager, Prize Development, XPRIZE
Stephanie Wander
Senior Manager, Prize Development
XPRIZE
Michael Krigsman, Founder, CXOTalk
Michael Krigsman
Industry Analyst
CXOTALK

Artificial intelligence is primed to pervade everyday life, from autonomous vehicles to intelligent ads that anticipate your desires. How will these shifts vary globally, and what do they mean for the future of work, life, and commerce? Two big thinkers share their views: Darrell West, editor in chief of TechTank at the Brookings Institution, and Stephanie Wander, who designs prizes for XPRIZE. Longtime CXOTALK guest, David Bray, joins the conversation.

Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC's Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to "think differently" on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He completed a PhD from Emory University's business school and two post-docs at MIT and Harvard. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and the Roger W. Jones Award for Executive Leadership. In 2016, Business Insider named him one of the top "24 Americans Who Are Changing the World".

Darrell M. West is vice president and director of Governance Studies and holds the Douglas Dillon Chair. He is founding director of the Center for Technology Innovation at Brookings and Editor-in-Chief of TechTank. His current research focuses on educational technology, health information technology, and mobile technology. Prior to coming to Brookings, West was the John Hazen White Professor of Political Science and Public Policy and Director of the Taubman Center for Public Policy at Brown University.

Stephanie Wander comes to XPRIZE with over 12 years of diverse experience in film production and digital media. Recently, Ms. Wander earned her MBA from the UCLA Anderson School of Management, where she focused on Sustainability, Corporate Social Responsibility, and Non-Profit Management. Ms. Wander uses this background to help design prizes that address the world’s grand challenges.

Download Podcast

The Global Impact of Artificial Intelligence

Michael Krigsman: Live from New York City! No, we’re not in New York. I’m in Boston! [Laughter]

Welcome to CxOTalk! I'm Michael Krigsman, industry analyst, and your host. CxOTalk brings the most interesting, experienced people to have in-depth conversations about issues such as the one we're talking about today: artificial intelligence. We have a wonderful show today, and we'll be speaking with three great guests.

Stephanie Wander is with the XPRIZE. David Bray is an Eisenhower Fellow, an executive-in-residence at Harvard, and is CIO of the FCC. And, Darrell West is from the Brookings Institution.

[...] I want to remind everybody that there is a tweet chat happening right now, using the hashtag #cxotalk. And, I want to give a special thank you to Livestream for supporting us with our video infrastructure and distribution. Livestream is really great, and we love those guys.

So, let’s begin. Stephanie Wander, how are you? Thanks for being here. Tell us about yourself and about XPRIZE.

Stephanie Wander: Thank you so much for having me! XPRIZE is a nonprofit foundation. We are dedicated to changing the world by offering large-scale incentive competitions. So, we offer millions of dollars to teams of innovators to solve the world’s biggest problems. At XPRIZE, I’ve been really privileged to design several prize competitions, including the one we recently launched for the IBM/Watson XPRIZE. And, I’ve been recently changing roles a little bit; I’m going to start working to launch our XPRIZE Institute, which will be a strategic pillar for our organization helping to set forth our vision towards overall abundance.

Michael Krigsman: Okay. David Bray, you have been a guest here on CxOTalk quite a number of times. Welcome back!

David Bray: Thanks for having me, Michael! So, you already mentioned my role as Chief Information Officer at the Federal Communications Commission. It’s a nonpartisan role, working across the eighteen different bureaus and offices of the FCC. Our scope is everything wired and wireless in the United States.

My other hat is that of an Eisenhower Fellow to Taiwan and Australia, which meant that in 2015, I got to meet with both the public sector and private sector leaders in each country and ask what their plans were for the Internet of Things. And that conversation continues now. Obviously, with the Internet of Things, the only way you're going to make sense of all that data is with machine learning and AI.

Michael Krigsman: Fantastic! And, Darrell West. This is your first time, and welcome to CxOTalk!

Darrell West: Thank you, Michael! It’s nice to be with you.

So, I direct the Brookings Center for Technology Innovation, so we’re interested in all things digital. We’re especially interested in the legal policy and regulatory aspects of technology; how all the various innovations that we’re seeing affect people, the impact on society and the economy. Some of our work is based in the United States, but some of it is global in nature.

Michael Krigsman: So, Darrell, let’s kick off the conversation, and would you share your view of digital disruption with us, and the changes that we’re seeing around us?

Darrell West: Well, we are living in such an extraordinary time period, just because the pace of change is amazing, you know? When you think about what we're seeing now: the rise of robotics is starting to transform the workplace; many factories are being automated; the development of artificial intelligence is advancing many different areas; virtual reality is starting to come on the scene. Last week, I was watching the Super Bowl, and there actually were ads for virtual reality. So, we're seeing that start to hit the consumer market.

So, it really is a great time period, but it also raises good and bad questions for society. Therefore, we need to think about what impact these various emerging technologies are going to have on all the rest of us.

Michael Krigsman: So, what are some of those impacts on society that we’re likely to experience with artificial intelligence, autonomous systems, autonomous vehicles, and so forth? Does anyone want to take a crack at that?

Stephanie Wander: Sure, I’m happy to. I think we, at XPRIZE, really look to AI … Well, we look at it for twofold [reasons]. One is we believe these disruptive technologies will have an incredible impact on our ability to change the world for the better, to actually help create more equity, and to really enable us to personalize solutions so that everybody has access to the best possible solutions for them, whether it’s in health or education.

We certainly are looking at risks like automation and how it might impact the workforce. We're also really interested in these surprising kinds of events that may happen. For example, we were looking at tissue engineering and realized that as we get autonomous driving in place, we're going to have a lack of organs available for transplant. So we're thinking about how we get ahead of those problems at XPRIZE, and how we plan for that future in the best possible way.

David Bray: So, I'll build on what Stephanie said. My conversations in Taiwan and Australia were really about how we do the business of a nation, and how we do that business at the local level as well. We know that with automation and machine learning and AI, there are going to be huge advantages. We may be able to make sense of data to make communities healthier and safer in ways that humans would just not have the time to wade through all the data to do.

At the same time, we know that a lot of what the private sector is aiming for is to automate many of the jobs that right now may have a human in the loop. You'll actually have greater productivity if the human's not in the loop, which raises questions about what the jobs of the future are. Will more jobs be created than destroyed by AI and autonomous systems? And really, what type of education is needed to make sure we have a workforce that can even be competitive in that future scenario?

Michael Krigsman: Stephanie mentioned this combination of providing greater equity in the world, but at the same time, [there are] risks. It seems to me that this question [of] balance between the possibilities, opportunities, and risks seems to be at the crux of the issue and the questions around AI, as well as the ethical questions.

Darrell West: I think that is the case. Trying to find the proper balance on technology innovation is the key challenge that awaits us, because when you think about a lot of these technologies, they are going to liberate people, make us more efficient, and free us to do a lot of other things. There are a lot of good things. I personally love technology and the freedom that comes with that.

But then at the same time, there are questions in terms of: is there a possible loss of privacy? We're going to have billions of sensors deployed in the workplace, in people's homes, in the systems of transportation and energy, and in health care. How are we going to navigate this new world? People are used to dealing with computers in a certain way: you have a computer, and you work on your tablet and your laptop. Increasingly, computing is going to move to machine-to-machine communication, so humans are going to be taken out of the equation. However, we have to make sure that when these machines are making decisions, they are respecting basic ethical considerations, making decisions in a non-discriminatory manner, and doing the types of things that we want, as opposed to things that might create problems for us.

Stephanie Wander: I think if we step back a bit, we're not even just talking about artificial intelligence. We're talking about a really rapid pace of change in society, and how we get data, capture data, understand what's actually happening, and understand what kinds of impacts we're even having. And the time we have to analyze and act is going to decrease. So, it's really going to be interesting to see how, as a society, we manage all of the opportunities and challenges that arise.

David Bray: And I can echo what both Darrell and Stephanie said, because I think it's great that they're talking about it. There are huge opportunities and huge benefits, and it really is about the rapid pace of change. If you think about it, when the car first came out, nobody really thought we were going to have these challenges of interstate crime, because now you can use a vehicle to participate in a crime in a locality that's not your own, and the local police may not know who you are. That doesn't mean we shouldn't have rolled out cars. We definitely should move forward, and we should try to embrace these things, because technology itself is amoral. It's how we choose to use the technology that determines whether it's a good or bad outcome.

And so what I want to see is: what are the conversations we need to have with the rollout of AI and machine learning, so that we can be informed of the choices, both as individuals and as societies? Because what we're really facing is a rapid change of technology - AI and how it's impacting individuals - where people are becoming super-empowered, but humans themselves aren't really going to be changing. And so, ultimately, what do we do in a world in which people are super-empowered through technology? What does it mean for family lives, work lives, and society?

Michael Krigsman: Is this something that's screaming out for regulation? Does the market regulate it? How do we balance the risks associated with AI and the fact that there may be disproportionate advantages and disadvantages to certain groups inside society?

Darrell West: I think we have to be careful about being heavy-handed in the regulatory process. When you look at the past with emerging technologies, we've often done that. But, with digital technology over the last few decades, we have allowed private-sector companies to experiment, to innovate, and to bring new products to the marketplace. Basically, the government's role has been building infrastructure and thinking about the broader legal and regulatory environment, while trying not to impose too many restrictions, because we want to see what these innovations can do. Now, along the way, we've seen some problems, so I think it's appropriate for agencies to step in and deal with particular issues.

David Bray: I personally am concerned that the pace of technology change is such that top-down is not going to be able to keep up with it. So, I'll agree with Darrell in terms of wanting to see what's possible, but we may even need to rethink how we begin to address this, just because of the sheer scope of change. If you go through the normal process of review and coming up with some idea and some response, that might be two or three years, and we know that's lifetimes. Stephanie can maybe speak to XPRIZE cycles; you probably don't think beyond six or twelve months, just because what the technology can do is changing every six months.

Michael Krigsman: Yeah...

Stephanie Wander: That's spot on. I would say we think about three to five-year time horizons for the most part. I wanted to speak to the other side of this, which is: what do we do about bringing everybody with us on this journey? At XPRIZE, we feel really strongly that the smartest people are out there in the crowd. We're about to see a billion people come online in the next decade, and for us, it's really about ensuring that they have access to education and that we are empowering everybody with tools to be solvers. And I think a lot of this will come down to actually having access to the wealth that's generated in the coming decades, in terms of whether we see everyone getting the benefit, or whether we see a sort of top-down kind of model.

Michael Krigsman: You know, the thing with this is that it sounds like, in order for everybody to get the benefit, it requires social change. So, we've got this technology, AI, and technology in and of itself is neutral. But, it has the power to drive so many changes, as David was saying: economic change, social change, cultural change … And so, how do we manage through this transition period, which may be a lengthy one as well?

Darrell West: I mean, the key thing I think, at this stage, is really digital access. We still have about 20% of Americans who are not online, and therefore not able to share in the benefits of this amazing technology revolution that is taking place. But even among the 80% that does have access to the internet, some have speeds too slow to take advantage of the latest developments. So, I think one key challenge for all of us is to increase access in a way that allows everybody to take advantage of the things that are taking place. And then, when you look internationally, as Stephanie mentioned, there are billions of people who have no access to technology. Many of those people are located in India and China, and in various parts of Africa. Therefore, we're working on ways to improve digital access in those parts of the world as well.

David Bray: I would almost equate it to how the industrial revolution took about a hundred years. I think we are going to compress that, and I'm just going to put out an estimate: it's going to be a hundred years of change, compared to the industrial revolution, in less than 20 years. And, if you think about where we started in the 1800s, 95% of people never went beyond a five-mile radius of where they were born. Then, at the end of the industrial revolution, we could travel over both oceans as well as across large continents. And so, you're right, Michael, that this is going to change how we live, how we work, and how we experience things.

Supposedly, according to historians, the way we dealt with the industrial revolution, how we coped with it, was through alcohol. Now, I'm not saying we're going to cope with the AI revolution through alcohol, but humans are going to need some safety valve. Maybe it is going to be virtual reality, or maybe augmented reality, or some other way to help us cope, where we have a little bit of fun with the technology but also recognize that the way we fundamentally live is changing - that we are no longer just going to live in the five-mile radius where we were born.

That same sort of thing is going to happen with work: you're not necessarily going to have just one job, or even do a job by yourself; you'll be doing it with the collective intelligence of both machines and humans, working in ways that we can't even comprehend today.

Michael Krigsman: How do we avoid the problems that we currently have, even in this country, resulting from globalization? For example, people who were working in a factory where the jobs have been displaced are taking the brunt, while the broader economic benefits of globalization accrue to the country. So there's a disproportionately negative effect on that particular group of people, and it has expressed itself in politics today.

David Bray: I will avoid the political side, because I'm nonpartisan, but I will say you've hit the nail on the head, and this is with all respect to anyone who's an economist. Economics was developed at a time when you couldn't know everything that was going on economically in the world in real-time, and so it is an approximation of what we think human behavior is. But, it's really not a science. In fact, there's a wonderful article from the early 2000s in the American Economic Journal that takes ten classic predictions from game theory and looks at how people actually behave, and it turns out they match actual behavior only about 30% of the time.

And so, it may very well be that we have made guesses on policy in the past that were not based on actual empirical evidence as to whether they would create jobs or support people’s livelihoods. We’re now facing the fact that there may be this low tide of globalization, where workers in a country with a strong currency cannot compete as easily as those in countries with weaker currencies. Now, I’m not saying we should devalue our currency, but that does raise the question of whether we made decisions in the past that were anecdote-based, as opposed to based on empirical science.

Maybe now with the Internet of Things and machine learning, we can look at what people’s economic decisions are around the world in near real-time. We can see what would actually trigger and stimulate more jobs at the local level in rural areas that maybe are losing jobs at the moment, and actually have it be evidence-based job creation as opposed to anecdote-based job creation.

Michael Krigsman: Darrell, what’s going on around the world? You spend a lot of your time in other countries, so how are other countries thinking about this very difficult set of issues?

Darrell West: Every country in the world is trying to figure out exactly what the policy should be and what type of encouragement they should give to build a pro-innovation economy. They want to get the advantages that they see taking place in the United States and in Europe. Recently, I've been in Singapore, which is a hotbed of technology innovation; Singapore actually is a global leader in many aspects of new technologies. I've been to China; they're trying to figure out how to take advantage of these trends. They see technology as a big driver of the next stage of economic development, and they want to make sure that their companies are at the leading edge of this.

But, you know, it's complicated for every country, because they look at the United States, and especially Silicon Valley, and they say, "Oh, we'd like to have a Silicon Valley in our country as well." But it's been virtually impossible for other countries to replicate that model, because the United States has this particular blend of educational institutions, the ability to raise capital through venture firms and otherwise, and then a regulatory process that has been pretty light-touch and has allowed these firms to innovate. So, other countries are trying to find their own particular niches, so that they can build a twenty-first-century economy.

Michael Krigsman: Darrell, I know you have focused very much on autonomous systems like autonomous vehicles. How does that break out, as opposed to the broader AI sector?

Darrell West: We have done work on autonomous vehicles. We put out a paper on this looking at the development of autonomous vehicles in China, Europe, Japan, Korea, and in the United States. And, virtually every region and every major car manufacturer around the world is interested in this new technology and spending millions and billions of dollars trying to promote it. So, everybody is interested in this.

This is a revolution that is probably going to take place much more rapidly than many people realize. Most of the major car companies are aiming to roll out actual autonomous vehicles by 2020, so that’s not very far away. We’re already starting to see it in the taxi area and in the car-sharing business. Another sector likely to be disrupted is truck driving and delivery systems. There is a lot of experimentation taking place there. In the United States, China, and other places, there’s a lot of enthusiasm about this, because they see this technology as developing very rapidly, and they’re poised to deploy it commercially.

Stephanie Wander: And just to add to that: I’ve encouraged people to really pay attention to autonomous flight as well. There are actually quite a few companies building electric, highly efficient aerial vehicles for human transport, so pay attention to Uber, pay attention to a company called EHANG; there’s just a lot of interesting stuff that we’ll probably see a lot sooner than we think. There are going to be some major regulatory issues for them to get through, but from a technology standpoint, they’re very close; we may well see personal flight in our lifetime.

Michael Krigsman: We have a number of different, we could say, application areas of AI, and autonomous systems, and machine learning. Are the ethical and policy issues distinctly different for each of these? How do we address that aspect?

David Bray: So, I personally would advocate context, context, context. I do think context matters. That said, I think we need to approach it first and foremost with an almost human-centered approach, asking, "Is this giving more freedom, more autonomy?" as Darrell said. Technology is great in that it gives people freedom, so we need to think about continuing to provide people more freedom.

At the same time, with the freedoms provided to the individual, what are the possibilities of what they could do to other individuals? So I almost think we need to take the Golden Rule, "Do unto others as you would have them do unto you," and update it for the technology era: have the AI, have the machine learning, be such that it allows you to do unto others as they would permit you to do unto them. So, we need to be able to express, in a way that's not checking a thousand boxes or fiddling with privacy settings, what we are willing to have done to both our person and our digital self. And then, the machine and the AI respect that.
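A minimal sketch of that idea, in Python, might look like the following. The ConsentPolicy structure and the purposes and data types here are invented for illustration; nothing in the conversation specifies how such permissions would actually be encoded.

    # Hypothetical sketch: a person expresses what may be done with their data,
    # and the machine checks those permissions before acting.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentPolicy:
        # Maps a purpose (e.g., "traffic_safety") to the data types the person
        # permits to be used for that purpose.
        permissions: dict = field(default_factory=dict)

        def permits(self, purpose: str, data_type: str) -> bool:
            return data_type in self.permissions.get(purpose, set())

    # Example: location and speed may be used for traffic safety, nothing for ads.
    policy = ConsentPolicy(permissions={
        "traffic_safety": {"location", "speed"},
        "advertising": set(),
    })

    def act_on_data(policy: ConsentPolicy, purpose: str, data_type: str) -> None:
        # The system respects the person's expressed preference before acting.
        if policy.permits(purpose, data_type):
            print(f"Using {data_type} for {purpose}")
        else:
            print(f"Declining to use {data_type} for {purpose}")

    act_on_data(policy, "traffic_safety", "location")  # permitted
    act_on_data(policy, "advertising", "location")     # declined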

Darrell West: And to add to what David was saying, I think when we look at autonomous vehicles, the legal and regulatory challenges are enormous, just because the transformation and the impact on people’s lives are going to be enormous. For example, when we were doing our research on autonomous vehicles, I was surprised to discover that fully autonomous vehicles collect over a hundred thousand data points. People have no idea how many sensors there are and what kind of information is being measured. Autonomous vehicles have sensors that measure what’s going on in the engine, the speed, and how you’re dealing with various things that you encounter. People are going to be surfing online and texting while they’re riding in fully autonomous vehicles. They’re often doing so over the automobile’s WiFi system, so basically everything you’re doing, which you think is a personal act, is going to be captured.

So, the question is: how do we deal with that, and who has access to that information? Insurers, of course, love this, because right now when they're offering safe-driver discounts, they're basically having to take your word that you're respecting the speed limit and driving safely. In the future, they may be able to get access to your car's actual data to find out: Are you speeding? Are you driving drunk? There are sensors that can measure your alcohol level as you sit in the car seat. Who's going to have access to this information? Who owns it, and for what purposes are they going to use it?

Michael Krigsman: We have an interesting comment from Twitter, and I’m hoping I’m going to pronounce his name correctly. Ergun Ekichi, I probably pronounced it wrong. Anyway, he makes the comment that with the increased adoption of AI, technology is changing the way enterprises engage and understand customers, and I think there’s a real possibility for that exact same thing to happen in the public sphere, in the relationship between governments, policies, and citizens. So, any thoughts on this?

Darrell West: Well, there's always a potential for technology to bring citizens closer to the government, in the sense that if you have complaints about garbage collection in your neighborhood, you often feel that government is remote, they're not responsive, and they don't really address your problems. But, through some of the smart city applications that are coming online, it's possible to make that complaint, have the city agency deal with it, and track in real-time what they are doing and how they are responding to your particular problems. So, there's a potential for really good things coming out of this. There's a lot of citizen cynicism today; people feel that government's not very responsive to what people want. Technology may end up being part of the solution for that.

David Bray: I will build on that real quick and just say, I think with machine learning and AI, there’s an opportunity to both ideally bring citizens and public service closer together, but maybe even re-envision how we actually even do public service. If you’re thinking about it, when I talk to audiences, I like to ask them to raise their hand and say, “Do you have in your pocket the ability to call anyone in the United States, anyplace, anytime?”, and most of them raise their hands if they’ve got a smartphone or a cell phone. But, I say, “Did you have that twenty years ago?” and most people did not.

And so, the same thing with machine learning and AI: we may even be able to stop doing things that require a government professional to do them, and instead think about things that can be done by the public directly. I mean, if you saw pollution in your area, or traffic around dangerous road construction in your area, would you be willing, if the data were kept private and anonymous, to share that data to inform public service to fix the problem? Probably you would, if you were assured it would be kept private. And so, things that in the past required government workers to spot and fix, the public could now do directly, if they're concerned at a local level about making their communities healthier and safer.

And similarly, many things were structured the way they were simply because of the time it took for information to travel: getting from Topeka, Kansas to DC took four days on horseback maybe one hundred and fifty years ago; now, it's milliseconds. And so, there may also be public-private partnerships, and that's why I like to say "government" is an increasingly outdated term. What we really should be using is the term "public service," which includes members of the public, as well as public-private partnerships, and then government workers.

Michael Krigsman: So, one of the themes that has come out so far, during this conversation, is this notion of equitable access to resources, and also the notion of partnership. David, you just mentioned public-private partnerships. And Arsalan Khan asks a very, very interesting question on Twitter that I think hinges on the notion of bias in the data. With AI, if your data is biased, you’re going to have biased outcomes, and equitable access and results depend on impartiality. So, how do you think about this issue of bias?

Darrell West: This has been a risky area for some of these emerging technologies. There already have been some issues where technology, instead of playing to our best instincts, allows people to play to their worst instincts. For example, on ride-sharing services, on Airbnb and so on, there has been some evidence that when a host or driver sees a picture of an African American who wants to rent their home or ride in their vehicle, they are a little less likely to accept that rider. So, that's an example of where the technology itself is neutral, but the way people use it isn't necessarily so neutral. So, we have to be careful that as these artificial intelligence systems develop, as data analytics take place, as we see a big increase in machine-to-machine communications, the technology respects the values that we care about: that it doesn't allow us to discriminate, it doesn't allow us to act unfairly, and it doesn't allow us to play to our own worst instincts.

Stephanie Wander: Yeah, just to build on that: what you sort of [...], Michael, is really the double-edged sword of artificial intelligence. It's the idea that we actually have technology that can help make decisions for us, or can personalize things. Of course, humans have implicit bias, so our data will also be biased, and the decisions that machines make for us may have some bias in them. I think it's going to be the question of our age: how do we enable our decisions to be outsourced to improve our lives, but then also manage that and ensure that we're still getting exposure? We have much more of an opt-in culture toward the world's knowledge, and I think there's also a potential scenario where we capture a lot of information and kind of lose it as a society. It's captured, but we don't choose to look at it or question it anymore, because it has this sort of obsolescence to it. It really is a tough, tough question, and I think we'll spend a lot of time talking about it in the future.

Michael Krigsman: In a way it’s different, but we’re dealing today with the issues of privacy and data collection. It’s not exactly the same, but in terms of the pervasive collection of data: what do third parties, such as [other] companies, and the companies themselves do with that data? It seems like drilling down into this is one of the most basic ethical issues associated with AI, at least in the public sector for sure.

David Bray: So, going back to the computer science mantra that garbage in is garbage out, what we really need to think about is: can people at least be aware of the data that is being fed to teach the machine and inform it? And then, can we maybe even have a machine that watches what conclusions other AIs reach, and points out if it observes biases? For example, California right now is actually trying to use machine learning and AI to help set bail decisions. The challenge is that it was initially fed historical bail decisions, and when people looked at the decisions it started to make, they realized there was a bias in those past bail decisions. It was taking into account things like someone’s gender, or their race, or their height, or their weight, which really should not matter when you’re trying to set bail.

So, I agree that, in some respects, it's almost like going back to James Madison, in 1788, in the Federalist Papers, where he said, "What is government, but the greatest reflection of humanity? If all men were angels, no government would be necessary." Let's just replace the word "government" with "AI" and say, "What is artificial intelligence, but the greatest reflection of humanity? If all men and women were angels, we wouldn't need AI." And so, it's going to reflect us, but it may also be able to help us. I'd actually love to see, and I don't know if I can convince Stephanie at XPRIZE to do this, a challenge for a machine that would help point out our own biases as individuals, so at least we're aware of them, and then we can try to figure out how we're going to address them.

Michael Krigsman: We have a very interesting question from Twitter. Chris Curran is a partner at PWC and has been doing digital transformation and CIO-related research for the many years that I’ve known him. He asks, “How do you determine if a large machine learning training dataset is biased?” I mean, that’s a really tough question!

Darrell West: Yeah, I think that's a great question, and I think the answer comes down to open data. These technology systems are generating extraordinary amounts of information, and this information can allow decision-makers to make decisions in real-time. You know, we're used to research projects that take days, weeks, or months to collect data, analyze the data, and report back. Well, in the digital world, we can actually get that data in real-time, analyze it almost immediately, and act on the latest information. But, as Chris suggests, it's a tricky issue when you're analyzing material in real-time: how do you make sure that the information is there, that it's accurate, that it's being used for good purposes, and that it doesn't enable discrimination on the part of people who hold various points of view? So, that is the real challenge.

But, if we make the data open and accessible to researchers, that acts as a check on what's going on in those systems, because researchers can look at the data. We've already seen several examples where this is taking place, and researchers have identified some problems. So I think ultimately, that's a way to build in some safeguards and make sure that these systems are serving us, as opposed to enabling more nefarious practices.
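One simple, illustrative way to start answering Chris Curran's question, assuming the training data is open as Darrell suggests, is to compare outcome rates across groups defined by an attribute that should not matter. The records, group names, and the 80% rule of thumb below are assumptions for the sake of the sketch, not anything described on the show.

    # Illustrative check: compare favorable-outcome rates by group in training data.
    from collections import defaultdict

    # Hypothetical records: (group, label), where label = 1 is a favorable outcome
    # (for example, bail granted) in the historical data used for training.
    records = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
    ]

    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        favorable[group] += label

    rates = {group: favorable[group] / totals[group] for group in totals}
    print("Favorable-outcome rate by group:", rates)

    # Disparate-impact ratio: a common rule of thumb flags ratios below 0.8.
    print("Disparate-impact ratio:", round(min(rates.values()) / max(rates.values()), 2))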

Michael Krigsman: What are some of the safeguards that need to be built in to help the public sector keep up with that rate of change that we are talking about, and help to support the growth of technology such as AI; but at the same time, ensure that the public interest is met along with it?

David Bray: So I think we need safe spaces to experiment. The challenge that I think we're facing in public service is that, obviously, we have tight funding constraints; we now also have a hiring freeze. So, the question is: where are the places where we need to make sure the trains keep running on time, that things are 99.9% up? Those are things that maybe you can't experiment with yet. But at the same time, if you don't find anywhere that you can experiment, to try to use AI more, use machine learning more, to push the envelope, then we're going to quickly find that we fall behind and become out of date.

So, we need these safe spaces. Just as we have the Defense Advanced Research Projects Agency to help keep the Department of Defense abreast of technology change and how it can be brought into their mission, we may need a civilian equivalent of an advanced research projects agency, where agencies and departments could bring their thorny problems, the ones where what they're doing right now could be better, could be an exponential leap in serving the public. At the same time, they may not necessarily know how to get to an answer; they're going to have to take an experimental approach. What I would love to see is this group reach out to the private sector, reach out to individual members of the public, and say, "Here are our real thorny problems." Maybe it's that we need better data and a better science of how to create jobs. Do people have ideas as to how we can do that? Then invest in those ideas and see what works.

And what I would really love is for this not to be specific to any one department or agency, because in some respects I think we need to defragment that; the problems we now face span multiple agencies and departments. Instead, take maybe the three or five top big issues that cut across all of government at the local, state, and federal levels, bring those problems to bear, and then use the power of We the People to pitch ideas as to how we’re going to bring AI and machine learning to help solve those issues.

Darrell West: And to add to what David just said, I think the other thing that we should be thinking about is improving the level of transparency in how artificial intelligence operates, so that all of us have a sense of the bases on which these systems are making decisions. Right now, AI is a big black box. There are algorithms and millions of lines of software code, and we don't necessarily know what dimensions are being put into those things. So, some people have suggested a little more transparency about how those algorithms are operating, what the bases are for these artificial intelligence systems, and how machines are learning from the big data that is being generated. All of those things would make a big difference in making people more comfortable with some of these emerging technologies.

Michael Krigsman: But Darrell, clearly what you’re saying is right. The moment you start talking about transparency, though, there are people, especially in the commercial sector, who will stand up and say, “Wait a second. These are our proprietary trade secrets.” So, you get right back into the crux of the problem of balancing public interest against private need.

Darrell West: Absolutely. And that is a very crucial question, and certainly, you know, companies should be allowed to have some proprietary information. There’s a long history throughout the world of trying to protect that. With emerging technologies, though, we also need to understand that the social and economic impact on the rest of us is so extraordinary that we, as a society, do have some vested interest. We don’t need to know all of the proprietary information that is there, but we should have some sense of how these systems are operating, what the fundamental decision points are, and how various ethical dilemmas are being handled. I think there is a social good that comes out of that type of information.

David Bray: And I would add to Darrell and say that one of the good things about public service is that it's not in competition, whereas in the private sector, you keep things secret because maybe that's your intellectual property, maybe that's your trade secret, maybe that's your competitive edge. And so, with public service, I think we can ask for more transparency and openness than we may be able to expect in the private sector, because it really is there to serve the people. And what we may be looking at in the next ten, twenty, thirty years is the degree to which a nation is transparent about what the machines are making sense of, what data they're ingesting, and what decisions they're making on that data. Maybe you can't reveal the complete intricacies, because there is privacy associated with the data; maybe there are one-way hashes of the data so you can't figure out specifically which people it's talking about; but you can at least express what decisions are being made, what outcomes are being decided, what data is being ingested, and so on.

And maybe we also need to think about, for public service, something akin to a credit report, where you can actually see what data is being used about you across the different departments and agencies. Can I verify that all the data is correct? And then, [number] two, have I given consent, have I made an informed choice, for that data to be used for that purpose [and] for that outcome?
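As a rough illustration of the one-way hash idea David mentions, a salted hash can replace a direct identifier before data is shared, so records can still be linked without exposing who the person is. The identifier, salt value, and record below are made up for the sketch, and hashing alone is not sufficient to prevent re-identification in practice.

    # Illustrative pseudonymization with a salted one-way hash.
    import hashlib

    SECRET_SALT = b"replace-with-a-securely-stored-random-value"

    def pseudonymize(identifier: str) -> str:
        # One-way: the token cannot be reversed to recover the identifier.
        return hashlib.sha256(SECRET_SALT + identifier.encode("utf-8")).hexdigest()

    record = {"citizen_id": "jane.doe@example.com", "complaint": "pothole on Elm St"}
    shared = {
        "citizen_token": pseudonymize(record["citizen_id"]),
        "complaint": record["complaint"],
    }
    print(shared)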

Michael Krigsman: Are there examples of countries around the world who have made greater progress than the U.S. in terms of grappling with these issues?

Darrell West: Certainly, there’s a lot of variation in how different countries are handling these types of issues. So for example, the European Union has been very tough on privacy considerations, so they’ve gone further in terms of wanting to look inside the black box of artificial intelligence, developing very strong privacy rules, respecting the idea that people own their own data, and they have a right to control how that information is used.

The United States tends to be a little more libertarian and hands-off in thinking about those issues. I mean, we talk a lot about privacy, but a lot of the privacy rules still are voluntary in nature and developed by companies as opposed to being imposed through government regulations.

In Asia, there’s quite a bit of variation in how important privacy is in particular countries, and in how much ownership people have over their own information. So, I think every country is struggling with these issues. Countries are finding that balance and drawing that line in different ways, depending on their own histories, their own backgrounds, and their own values.

Michael Krigsman: We have just about five minutes left; a little less than five minutes left. I thought it would be interesting to ask each one of you, and Stephanie Wander had to do the drop-off, so …

David Bray: Wander off…

Michael Krigsman: Stephanie Wander had to wander off. [Laughter]

David Bray: Sorry.

Michael Krigsman: So we’re talking with David Bray, who is an Eisenhower Fellow, an executive-in-residence at Harvard, and the CIO of the Federal Communications Commission, and we’re talking with Darrell West, who is with the Brookings Institution.

Just in the last few minutes, could each one of you offer your thoughts or prescriptions, as a summary, for how we balance these various competing interests, in order to allow AI to move forward in a way that supports the common good rather than being detrimental to it?

David Bray: Yes, I’ll jump in first. One thing that we didn’t get a chance to talk about was the news last week that an AI/machine-learning algorithm beat five of the world’s top poker players after twenty weeks of training. This was after playing multiple rounds, upwards of fifteen or twenty rounds per day, with these top five poker players, and learning every night and refining its strategy. Poker is interesting because of bluffing, and so we now have an AI that demonstrated it can out-bluff five of the world’s top poker players, which raises an interesting question: is it ethical for a machine to bluff? Is it ethical for it to deceive? If you go to negotiate the price of your car, and maybe you’re not a good negotiator, would you want an app for that, one that will negotiate on your behalf with the dealer instead of you doing it yourself?

That, I think, raises huge issues. The future is now, and it’s coming at an accelerating, very fast rate. So, my three recommendations would be: first, where are the safe spaces to experiment at the local level, as well as the national and global levels? Because you can’t even begin to tackle any kind of sensible policy around this until you’ve experimented with it and tried to use it.

Two, as Darrell said, and then Stephanie said, try to be as open as possible about the data that’s being used, as well as what the algorithm is doing.

And then, three, I really do think [you should] take the Golden Rule, "Do unto others as you would have done unto you," and update it for the 21st Century: "Do unto others as they would permit you to do unto them." I think that's really what we need to think about going forward.

Michael Krigsman: Fantastic. And Darrell West, your closing thoughts.

Darrell West: Yeah, just quickly, because we're running out of time. I see extraordinary advances coming in artificial intelligence, being deployed in transportation, energy savings, resource management, health care, education, and many other areas. I think there are going to be great benefits that come out of this. The key is to keep the balance right and to make sure that societal interests get represented. So, a little more transparency would be helpful; making sure that there are antidiscrimination rules and norms in place; and then just making sure that these systems conform to the basic values that exist in any particular society. I think that would help us get the advantages of the technology without suffering the downside.

Michael Krigsman: Okay! What a very fast conversation that has been! You’ve been watching Episode #218 of CxOTalk, and we’ve been talking about AI, the ethical issues, the governance, policy issues, and especially what’s happening around the world with some of these things. And, we’ve been talking with David Bray, Darrell West, and Stephanie Wander, who just had to drop off a few minutes ago. Thanks, everybody for watching. Next week, we have another really awesome show, so I hope you’ll join us. Bye-bye!