Data and automation have the power to transform business and society. The impact of data on our lives will be profound as industry and the government make greater use of techniques such as artificial intelligence and machine learning. Explore this important topic with two world experts.
Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC’s Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to “think differently” on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He completed a PhD from Emory University’s business school and two post-docs at MIT and Harvard. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and Roger W. Jones Award for Executive Leadership. In 2016, Business Insider named him one of the top “24 Americans Who Are Changing the World”. He is Chief Information Officer of the Federal Communications Commission.
Dr. Michael Chui is a partner at the McKinsey Global Institute (MGI), McKinsey's business and economics research arm. He leads research on the impact of information technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as Big Data, Web 2.0 and collaboration technologies, and the Internet of Things. Michael is a frequent speaker at major global conferences, and his research has been cited in leading publications around the world. His PhD dissertation, entitled "I Still Haven't Found What I'm Looking For: Web Searching as Query Refinement," examined Web user search behaviors and the usability of Web search engines.
Michael Krigsman: Welcome to Episode #219 of CxOTalk. I'm Michael Krigsman, an industry analyst, and your host. Today, we have such an interesting show. We're going to be talking about data, automation, machines, machine learning, AI, and the role of all of this in business today and what it means for the future.
Our guests are David Bray, who is … Well, David has been on the show a number of times; and David, why don’t you introduce yourself?
David Bray: I am David Bray; I refer to myself as a digital diplomat and human flak jacket, otherwise known as Chief Information Officer at the Federal Communications Commission. I should also mention that I'm an Eisenhower Fellow to Australia and Taiwan, which means I met with government and industry leaders in both countries about their plans for the Internet of Things. And I'll briefly just mention that ten years ago, I met Michael Chui at the Oxford Internet Institute. We were actually working together on how one could best do what was called "distributed problem solving": how you could bring together human and technology nodes to reach better outcomes in organizations. So that's why it's so great to be here with Michael again, talking about artificial intelligence.
Michael Krigsman: Fantastic! And, I met Michael Chui, who is with the McKinsey Global Institute through David. And, just after we arranged for Michael to be on this show, I saw him on CNBC in the most interesting segment. So, Michael Chui, welcome to CxOTalk, and please tell us briefly about yourself and what you do at McKinsey.
Michael Chui: Sure, delighted to. Like David, I was once a CIO of a public sector organization, but it was a much smaller one - a municipality. Now I'm a partner at the McKinsey Global Institute, where I lead some of our firm's research. It's part of the larger McKinsey & Company, a management consulting firm. Some of our research on the impact of long-term technology trends has included the distributed problem-solving that David mentioned, but also artificial intelligence, robotics, and automation.
Michael Krigsman: Michael, you have been studying data analytics. You came out with a very rich report; a lengthy, deep, and interesting report; last December. Please share your view with us on data and digital disruption.
Michael Chui: Well, one of the things that we've been studying for quite some time is the potential impact of using data and analytics to change organizations and society - whether that's, for instance, disrupting industries or organizations. Our first publication was actually five years ago: the report on Big Data. And to an extent, the report that we published in December is a sequel. That's a good thing in movies, and sometimes it's a good thing in research, too.
One of the things we identified back in 2011 was the tremendous potential of using data and analytics to really change the game. And we looked at the public sector in Europe, we looked at health care in the United States, we looked at manufacturing and retail, and location-based services, and said: In each of these domains, if you use data and analytics - either as a basis of competition, or as a way to increase the efficiency or effectiveness of what you're doing, or to change business models - we saw the potential for all kinds of good things to happen, both for the organizations themselves, whether it's a retailer trying to sell more or a health care provider trying to improve the healthcare outcomes of a country or an organization, etc.
We saw all kinds of potential. We said, “That could potentially happen within ten years.” And five years on, we said, “Let’s see how things are going. Let’s see how much value has been captured out of the billions of dollars of potential value.” And by the way, that’s not just profits, it’s improved healthcare outcomes, better services provided to citizens, etc.
And quite frankly, amongst the many findings in this last piece of research, when we took that sort of look in the rear-view mirror, we found the report card was actually quite mixed. Some organizations have really expanded their ability to use data and analytics, and some sectors have moved farther than others. To be frank, those are sectors in which, in many cases, you had a digital-native competitor you had to compete against. So retailers have moved along farther.
There are other sectors and organizations which, quite frankly, have been, on average, farther behind, capturing only about 10 to 30% of potential value. Unfortunately, some of those have been public sector. But again, there is great individual variation.
The other thing we found is the spread in performance between not only sectors but individual organizations, and I know David's been doing terrific things at the US FCC; and again, some of the organizations that have been doing the most have really extended their lead versus median or even lagging organizations.
Michael Krigsman: So, David, your view is a public policy view, especially when you wear your Eisenhower Fellow hat, so any thoughts on this from a public perspective?
David Bray: So, I agree 100% with what Michael said, that different sectors are embracing the opportunity provided by data analytics and artificial intelligence; and it does seem that, for those sectors where there is a digital-native incumbent - an organization, whether a startup or already present, that has embraced being digital - that pressures the rest of those organizations to move along.
The public sector, on the other hand, has additional challenges. Not just in the United States but around the world, governments are facing pressures to do more with less. Here in the United States, we had sequestration; in Australia, when I met with them, their public sector leaders were facing the possibility of a recession; in Taiwan, the economy was not growing as fast as it had previously. So here you have shrinking budgets, but at the same time, you are being asked to transform how you do your work; and so, it's the challenge of how do you leapfrog from legacy investments in IT? You can't be a startup, because you have to keep things running, and you have to keep on serving the public. At the same time, if you keep on doing things on-premise with legacy IT that's on average five to ten years old, you won't be able to get where you need to go.
And so, at the FCC, when I arrived three years ago, it was an interesting situation where they had had nine CIOs in eight years - which I said was a great sign for CIO #10 that things were just going great - and I quickly surveyed the situation and found that they had more than 207 different IT systems that were on average more than ten years old - we even had some that were approaching nineteen or twenty years of age - and they were consuming more than 85% of our budget, just to maintain those systems.
And so, that's where I said, "In two years or less," at the time, it was rather ambitious [proposal], and I think I had a lot of people that were a little surprised, "In two years or less, we want to have nothing on-premise at the FCC. We want to go straight to either public cloud, or a commercial service provider, because you cannot capture the benefits of data analytics, artificial intelligence, the Internet of Things, and making sense of the data that's coming from them if you are still tied to legacy IT."
And the good news is, two years later, we did it. We reduced our spending from 85% to 50%, and in a lot of ways, it was just setting the scene for getting ready for making sense of all these widespread data sources, making sense of what can be brought in from machine learning and AI.
Michael Krigsman: Michael, how do organizations make the decision to invest, and where should they invest? How do AI and machine learning come into play in a practical sense, as opposed to all the media hype or science fiction? How do we become practical about it?
Michael Chui: Well, a few things. I think one of the things that has happened over the years that we've been doing this research is that more organizations have started to understand the potential of data and analytics, and the application of these techniques - AI and machine learning - so awareness has certainly grown. In fact, amongst executives, whether they be public sector or private sector, they're starting to understand that data and analytics are either becoming a basis of competition, or a basis of providing the services and products that your customers, citizens, and stakeholders need.
The thing is, we've reached that level of awareness; but, as David and I have talked about, the gap between that awareness and the value that has actually been captured comes about for a number of reasons. As we looked at organizations all around the world, if you ask an executive or leader there, "Are you thinking about data? Are you thinking about analytics, and are you doing anything?", almost everyone says, "Yes." And many would say, "Oh, we've got a successful pilot where we conducted this experiment. We've invested in the technology, we've invested in hiring some data scientists and analysts, etc., in software and hardware, in Cloud or on-prem, what-have-you, while doing that transition."
What we found oftentimes: while there are often real technology challenges which take real investment, and time, and energy - and as a technologist myself, it's very interesting to talk about those things - oftentimes the real barrier is around the people stuff. It is: how do you get beyond an interesting experiment where there's a business-relevant insight? We could increase the conversion rate by X percent if we actually used this next-product-to-buy algorithm and this data; we could reduce the maintenance costs, or increase the uptime of this capital good; we could, in fact, bring more people into this public service because we can identify them better.
But getting from that insight to really capturing value at scale is where we've started to find organizations are either stuck or falling down. And it really has to do with: how do you embed that interesting insight - whether it's in the form of some sort of machine learning algorithm, or other types of analytics - into the practices and processes of an organization, so it really changes the way things operate at scale? To use a military metaphor: how do you steer that aircraft carrier? It's as true for freight ships as it is for military ships. They are hard things to turn.
And that organizational challenge of understanding the mindsets, having the right talent in place, and then really changing the practices at scale, I think that’s where we’re seeing a big difference between those organizations who have just reached awareness and maybe done something interesting, ones who have radically changed their performance in a positive way through data, analytics, and AI.
Michael Krigsman: I want to remind everybody that you’re watching CxOTalk. And, right now, there is a tweet chat that is taking place on Twitter, of course, using the hashtag #cxotalk. You can ask questions to our guests directly and they will answer.
David Bray: [Laughter]
Michael Krigsman: [Laughter] Well, we hope they’ll answer.
So, David Bray, Michael Chui was saying that the barrier to adoption is the people. Now, in the realm of AI and machine learning, how does this particular issue play out? Is there anything that’s unique about AI and machine learning that we need to be considering when we talk about adoption and the proliferation of these technologies in the enterprise in a meaningful way?
David Bray: So, that’s a really great question. First, I would say that I emphatically agree with what Michael was saying, that the real secret to success is changing what people do in an organization; you can’t just roll out technology and say, “We’ve gone digital, but we didn’t change any of our business processes,” and expect to have any great outcomes. I have similarly seen, both in the private sector and in the public sector - here in the United States, in the Federal government, but also in other countries like Australia, Taiwan, and other places in Europe - where they’ll do experiments that are isolated from the rest of public service, and they say, “Well look, we’re doing these experiments over here!”, but those experiments never translate into changing how you do the business of public service at scale.
And doing that requires not just technology, but understanding the narrative of how the current processes work, why they’re being done that way in an organization, and then what the to-be state is, and how you are going to be that leader who shepherds the change from the as-is to the to-be state. For public service, we’re probably lacking conversations right now about how to dramatically deliver results differently and better to the public.
Now, for artificial intelligence, in some respects, it’s just a continuation of predictive analytics, a continuation of Big Data, it really is nothing new in terms of the fact that technology always changes the Art of the Possible, this is just a new Art of the Possible. I do think there’s an interesting thing in which it could offer a reflection of our own biases through artificial intelligence. If we’re not careful, we’ll roll out artificial intelligence, populating it with data from humans, [and] we know humans have biases and we’ll find out that the artificial intelligence itself, the machine learning itself, is biased.
At the same time, we could actually use it to say, “Look for biases in past outcomes, past decisions, past performance within this organization, and let us know where things weren’t exactly as equitable or as beneficial as they could be.” So AI could either A) be a dangerous tool that just reflects, augments, and amplifies human biases, or B) give us a chance to look in the mirror and say, “You know, you’re being biased when you make these decisions or reach these outcomes.” And I think that’s a little bit more unique than with just predictive analytics or Big Data.
Michael Krigsman: Michael, let’s drill down now into AI a little bit more deeply, and machine learning. In your research, what are some of the business areas that are today most well-suited, and where do you see this going?
Michael Chui: So, we did do some research to try to understand where there’s the greatest potential from some of these technologies. One of the interesting things that we discovered was … Well, first of all, our hypothesis, as we looked across about ten different industry sectors and then a dozen different types of problems in each, was that, much as we find for other techniques, most of the value would be concentrated - 80% of the value coming from 20% of the problems, or something along those lines.
But when we surveyed about 600 different industry experts, for every single one of the problems we identified, at least one expert suggested it was among the top three problems that machine learning could actually help improve. And so, what that actually says is that the scope of the potential is just absolutely huge. There’s almost no problem where AI and machine learning couldn’t potentially impact and improve performance.
A few things that come to mind: One is a lot of the most interesting and recent research has been in this field called “Deep Learning,” and that’s particularly suited for certain types of problems with pattern recognition, oftentimes images, etc. And so those problems that are somewhat similar to image recognition, pattern recognition, etc. are some of those that are quite amenable and interesting.
So again, in terms of very specific types of problems, predictive maintenance is huge: the ability to keep something from breaking. Rather than waiting until it breaks and then fixing it, you predict when it's going to break - not only because it's cheaper, in fact, to keep something from breaking than to pay someone to fix it. The more important thing is that the thing doesn't go down: if you bring down a part of an assembly line, you bring down the entire factory, or oftentimes the entire line. Being able to avoid that cost applies just the same to a jet engine on the wing of an airplane, etc.
And so, to a certain extent, that is an example of pattern matching: when you have all of these sensors, which signals actually reflect that something’s going to break, so that you should go and do some preventive work? And we find that across a huge number of specific industries that have these capital assets - whether it’s a generator, a building, an HVAC system, or a vehicle - where if you’re able to predict ahead of time that something’s going to break, you can actually conduct some maintenance first. That is one of the areas in which machine learning can be quite powerful.
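The kind of sensor-based pattern matching Michael describes can be sketched in a few lines. This is a purely illustrative toy, not any real system: the baseline data, thresholds, and function names are all assumptions.

```python
from statistics import mean, stdev

# Toy predictive-maintenance check: flag an asset when recent sensor
# readings drift well above a baseline learned from healthy operation.
# All numbers and names here are illustrative.

def needs_maintenance(readings, baseline, window=5, sigma=3.0):
    """True if the mean of the last `window` readings exceeds the
    healthy baseline mean by more than `sigma` standard deviations."""
    mu, sd = mean(baseline), stdev(baseline)
    return mean(readings[-window:]) > mu + sigma * sd

healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98]   # vibration, normal runs
stream  = [1.0, 1.1, 1.0, 2.4, 2.6, 2.5, 2.7, 2.8]  # a bearing starting to fail

print(needs_maintenance(stream, healthy))   # prints True: sustained drift triggers a flag
```

Real deployments would use trained models over many sensor channels, but the core idea is the same: learn what "healthy" looks like, then flag departures from it before the asset fails.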
But the other thing is, again, taking this idea that you have one problem in one area, you look for analogous problems. If you think of health care as another case of predictive maintenance - on the human capital asset - then you can start to think, "Well gosh! I have the Internet of Things, I have sensors on a patient's body; can I tell before they're going to have a cardiac incident? Can I tell before someone's going to have a diabetic incident, so that, in fact, they can take some actions which would be relatively less expensive and less invasive than having that turn into an emergency, where they have to go through a very expensive, painful, and urgent-care type of situation?" Again, can you use machine learning to do that prediction? Those are some of the things that we're starting to see in terms of problems that can potentially be better solved by using AI and machine learning.
Michael Krigsman: David, a practical question from you, and then we have a few questions from Twitter. So, as a CIO, how much are you thinking about AI, machine learning, and predictive analytics in the operations of your organization?
David Bray: So, right now, I actually have an ask out to all eighteen bureaus and offices of the Federal Communications Commission to identify a bureau or office challenge or problem involving the public that they would like to have machine learning and artificial intelligence brought to bear on. Woe be to the CIO that tries to force a solution onto a bureau or office that's not ready for it yet. And so, this is trying to see if they are receptive, if they can spot something. Maybe it is identifying where we can provide, as Michael just mentioned, preventative maintenance of services that can actually benefit the organization and benefit the public, or making sense of the comments that we receive.
We did actually, back in 2014, make public the comments we received on a specific issue - there were four million of them - with the idea that we wanted to allow the public to bring tools to bear to make sense of them: sentiment analysis, understanding whether a comment was a "for" or "against" position. And I think in some respects, public service has the opportunity that it's not necessarily in competition with any organization; we could actually make our data available - recognizing we need to protect privacy - but once we protect privacy, make that data available to the public sector and the private sector to make sense of it. We don’t have to do it by ourselves. And so, I think the opportunity for artificial intelligence and machine learning is: what are those things - it’s a little bit harder at a national level - that will benefit the public?
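The for/against tallying David mentions could look, in a deliberately minimal keyword-based sketch, like the following. A real system would use trained classifiers rather than keyword lists; every keyword and sample comment below is invented.

```python
from collections import Counter

# Toy "for"/"against" tally over public comments using keyword matching.
# The keyword lists and sample comments are illustrative only.

FOR_WORDS = {"support", "favor", "approve"}
AGAINST_WORDS = {"oppose", "against", "reject"}

def classify(comment):
    words = set(comment.lower().split())
    if words & FOR_WORDS:
        return "for"
    if words & AGAINST_WORDS:
        return "against"
    return "unclear"

comments = [
    "I support the proposed rule",
    "We strongly oppose this change",
    "Please consider rural users",
]
counts = Counter(classify(c) for c in comments)
print(counts)  # one "for", one "against", one "unclear"
```

With millions of comments, the value of opening the data is exactly that outside groups can try many such approaches, from simple tallies like this to full sentiment models.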
I think a lot of things are going to happen first in cities. I mean, we’ve heard talk about smart cities. There, you can easily see the benefit if you can actually do preventative maintenance on a road, or provide power better, actually monitoring it to avoid brownouts. I think the real practical, initial, early adopters of AI and machine learning are going to be at the city level in some respects, and then we’ve got to figure out how we can best use it at the federal level.
Michael Krigsman: We have some questions from Twitter, and one is from Bob Russelman. This is a really interesting one, and he’s asking about the impact of automation and AI on human employment. And I think when we talk about AI, robots, and autonomous systems, this is one of the big questions that come up. So Michael, what are your thoughts about that?
Michael Chui: Sure. So, about a month after we published this Age of Analytics report, and about one month ago, we published another report - and by the way, these are freely available on the web - which really looked at the potential for automation to affect employment and the global workforce.
So, a couple of things: One of the things that we did in this research was to look at not just every occupation, because we think it's quite rare that, in fact, you'll be able to take someone out of a job and put an AI or robot in there that will do everything that they did. People actually conduct a number of different activities in any job. So, we looked at things at the level of individual activities, and scored them against eighteen different capabilities that could potentially be automated - everything from fine motor skills, navigating the physical world, cognitive tasks such as problem-solving, sensory activities, and even understanding and producing natural language. One of the highlight findings is that about 50% of all the activities we pay people to do in the global workforce could potentially be automated by adapting currently-demonstrated technologies; which sounds scary - wow, fifty percent of the things that we pay people to do! But that's not going to happen overnight.
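The activity-level scoring Michael describes can be illustrated with a toy calculation. The activities, time shares, and capability lists below are invented for illustration; they are not McKinsey's actual taxonomy or numbers.

```python
# Sketch of activity-level automation scoring: an activity counts as
# automatable only if every capability it requires is among those
# machines have already demonstrated. All data below is invented.

DEMONSTRATED = {"pattern recognition", "data collection", "fine motor"}

activities = [
    # (name, share of the job's hours, capabilities required)
    ("sorting invoices",   0.3, {"pattern recognition", "data collection"}),
    ("client negotiation", 0.4, {"social reasoning", "natural language"}),
    ("equipment checks",   0.3, {"fine motor", "pattern recognition"}),
]

automatable_share = sum(
    share for _, share, needs in activities if needs <= DEMONSTRATED
)
print(f"{automatable_share:.0%} of this job's hours could be automated")
```

The point of scoring at the activity level rather than the job level is visible even in this toy: no single activity set covers the whole job, so the job changes shape rather than disappearing.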
And again, part of our analysis was understanding what those timeframes might be. Now, we can’t predict the future, so we developed some scenarios with really wide bands around them. When you think about the requirements: I said that theoretically, 50% of these activities could be automated, but really, it takes time to integrate those capabilities technologically and create individual solutions. And then beyond that, you have to create a business case, because what I didn’t say was that this would cost less than it does for a person to do it. So again, you compare the cost to develop and deploy these technologies against the cost of using people to do the same things.
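The business-case comparison Michael mentions boils down to simple arithmetic: automation is adopted only when deployment plus running costs undercut labor cost over the planning horizon. All figures below are hypothetical.

```python
# Toy automation business case. All dollar figures are hypothetical.

def worth_automating(deploy_cost, machine_cost_per_year,
                     labor_cost_per_year, years):
    machine_total = deploy_cost + machine_cost_per_year * years
    labor_total = labor_cost_per_year * years
    return machine_total < labor_total

# $250k of machine cost vs $300k of labor over five years: automate.
print(worth_automating(200_000, 10_000, 60_000, 5))   # prints True
# Over three years the deployment cost is not yet recovered.
print(worth_automating(200_000, 10_000, 60_000, 3))   # prints False
```

Even this crude model shows why technical feasibility alone does not determine adoption: the same technology can clear the bar on a long horizon and fail it on a short one.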
And then finally, there's the natural curve of adoption of any technology, which often takes eight to twenty-eight years from the time something's commercially available to the time it reaches a plateau at its eventual full adoption. So it might take something like forty years, or ten presidential terms - at least, that's the midpoint of all the scenarios that we modeled - before 50% of current activities are even automated.
What’s interesting is that level of change in what people do is not unprecedented. If you look back at 1900, about 40% of the US workforce was about [...] and agriculture, and about seventy years later, about 2% was. So what that actually says to us is that in fact, we need to find new things for people to do, as automation comes into play, so that people are complements of the work that machines are doing. And in fact, we need that quite badly because of aging. We need everybody working plus the robots working to have the economic growth that we need.
It's been done before. I'm a sunny Californian, so I'm hoping this can be done, but it will require real effort to make sure we actually find new activities for people to do and find ways to make sure they get paid to do those new activities as machines work alongside human beings.
Michael Krigsman: So very clearly, then, this technology has the potential to drive a major social upheaval! Michael, that’s essentially the implication of what you’re saying!
Michael Chui: Yeah. I think the question is what word you want to use. I think “shift” is a different word than “upheaval”, and a different word than “disruption.” But what we are saying is that this affects all of us, because again, it’s not 50% of jobs; it’s nearly 100% of jobs that will have a significant percentage of their activities change. How can we all have the flexibility, have the training, have the retraining, so that we’re enabled to do new things as we use machines to improve our productivity?
David Bray: I would like to add to what Michael is saying, because I agree. It really is about augmenting human capabilities as opposed to replacing human capabilities. Instead of talking about artificial intelligence, we almost should be talking about augmented intelligence. And as we talked about earlier, what machine learning and AI are really good at are things that involve pattern recognition and are repetitive in nature.
So in some respects, I don't know that we humans really want to do things that are repetitive, rote, the same thing over and over again for hours on end. What this is really doing is freeing us up to focus on those jobs that are nonroutine, where there is no pattern present; or even, in fact, where the machine tips and cues us and says, "I've identified something that falls outside of the pattern. You should pay attention to it. I don't know why it's happening, but it's going to require a human to take a look at it." It's almost freeing us up to focus on those things that require more creativity.
Now, that said, it does require us to ask interesting questions, such as: What skills should we be teaching - not just to current students in school, so they can be ready for this future of working together in augmented capabilities, but also in retraining today's workers, so that they can be ready for a future that is not necessarily going to be rote and repetitive work for them, but instead is going to be about the non-routine work, the diagnostic work when a machine tips and cues you to pay attention to something? We really do need to look at what the future of pairing humans plus machines working together looks like, as well as what new patterns of work will emerge as a result.
Michael Krigsman: And what about the ethical issues of this? It's so fascinating to me because we've got essentially a technology, or set of technologies and techniques that very quickly have cultural, social, and educational implications; and therefore, that immediately takes us down the ethical pathway. So, what about that?
David Bray: So …
Michael Chui: [...]
David Bray: Go ahead, Michael.
Michael Chui: Go ahead, David!
Michael Krigsman: [Laughter]
David Bray: You first, Michael!
Michael Krigsman: [Laughter]
Michael Chui: [Laughter] A couple of things. You know, let me just build on something that David said before in terms of the need for augmentation. You know, one ethical issue to bring to bear is what is it that we’ll need to make sure that the next generation actually has better lives than this generation? For the past fifty years in the biggest economies in the world, about half of the economic growth we’ve seen has come about because of increases in employment, about half of it because of increases in productivity; the ability to use machines and other management innovations to do more with fewer hours.
In the next fifty years, we’re basically going to lose half of our sources of economic growth. Why? Because countries are aging. The US is aging. China’s workforce is actually decreasing in size, and that’s a billion-and-a-half people. In Japan, it’s already happening. And so, unless we have everybody working, plus the robots working, we simply won’t have enough economic growth for the next generation to have better lives than we do. So that’s an ethical question already. It actually suggests we need to accelerate the use of automation. But that means - and I think, Michael, [it’s time] to get to your question - that, as David mentioned before, and this is true not only for AI but for all technology, we have embedded a lot of our values in the technologies that we developed.
So, you know, lots of people talk about self-driving cars and the trolley problem: if a car turns one way, it hits pedestrians; if it turns the other way, it kills the people in the car. What should be done? That's a particularly stark and interesting philosophical discussion, but it will be a long time before we need to worry about those in a really deep way, because to a large extent, the cars are not automating the ability to do philosophy in practice. They're incorporating algorithms about what they're seeing on the road.
I think more importantly, as many of these technologies come into use - particularly machine learning, which is more about training computers rather than programming them - understanding what data you have in your training set is perhaps the most important thing. As David said, sometimes that training set is biased in terms of the data that you selected. And that’s where this idea of not just using data, and using analytics, but using them well, is what’s most important. And I come back to this: it’s not just the data and analytics, it’s about the people who use them.
And that's a lot of what goes into being a good data scientist. How do you make sure that you understand the provenance of the data, and the biases that come about because of how you collected it? One of the most famous examples that a lot of us who spend time in data talk about is using mobile devices in Boston to determine where there are potholes. People drive around, the accelerometer notices a bump, and it says, "Oh, there might be a pothole there." And one of the things that, at the time, was true, was that there was a bit of a bias in that sample set based on who had smartphones at that time. So again, one needs to understand how biases come about; it's really an ethical issue about what training set you're using to train a machine learning algorithm.
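The sampling bias Michael describes can be made concrete with a small simulation: two neighborhoods with identical road conditions but different smartphone ownership produce very different report counts. All rates below are invented.

```python
import random

# Toy illustration of sampling bias: the SAME pothole rate, but
# different smartphone ownership, yields very different report counts.
# All rates are invented for illustration.

random.seed(0)  # deterministic for the example

def reported_potholes(trips, pothole_rate, smartphone_rate):
    """A pothole is reported only when a trip hits one AND the driver
    happens to carry a smartphone running the detection app."""
    return sum(
        1 for _ in range(trips)
        if random.random() < pothole_rate
        and random.random() < smartphone_rate
    )

high_ownership = reported_potholes(10_000, 0.05, 0.9)
low_ownership  = reported_potholes(10_000, 0.05, 0.3)
print(high_ownership, low_ownership)  # same roads, far fewer reports where phones are rarer
```

A model trained on the reported counts would conclude that the low-ownership neighborhood has better roads, which is exactly the kind of collection bias a good data scientist has to trace back to its source.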
Michael Krigsman: We have a couple of people on Twitter who have asked the same question or made the same comment. And I want to remind folks who are watching on Facebook, if you want to be part of the discussion right now, hop over to Twitter using the hashtag #cxotalk. You can watch on Facebook and chat over on Twitter.
So, Neil Raden and Bob Russelman have both raised the same question: in this new world of job training, what kinds of skills are people going to need to be trained in so they can adapt?
David Bray: So, I will actually pull into that question what Michael just said about biases. I think it is being aware of both your biases and other people's biases, and how that impacts what the machine does. It's something that, if you're lucky, maybe you pick up either from your own childhood upbringing or from your schooling, but I don't think we currently have significant forces focused on it. I don't even know what the subject would be, other than critical awareness: being aware of your biases and of the biases of others, and how that impacts outcomes involving a machine and involving an organization. And so I think that's a new thing that doesn't exist yet, and one that, in some respects, a machine can actually reveal to us.
I also think it's going to be about cognitive offloading of certain things, and being able to turn off at the end of the day. I can easily see someone getting so wrapped up in the fact that the machine doesn't have to sleep, the machine doesn't have to eat, that they end up fourteen hours later still working with the machine and not turning things off. And so you're beginning to see that already, where people are saying, "You know, after about nine o'clock, ten o'clock at night, if you email me, I'm not going to respond. I'll pick it up the next morning." I think being able to cognitively offload some of your work, and recognize that a machine's going to keep on working in the background, is okay. But you, as a human, need to take care of yourself. That's also a skill. It's almost like how we deal with physical education for kids. We may need to equally do some version of cognitive relaxation awareness: knowing when to turn off your device, so that you're not on 24/7.
Michael Krigsman: A lot of social questions here. We’ve got only about ten minutes left, and one of the topics that I really wanted to talk about is: Michael speaks about the concept of “radical personalization.” I think that’s very important. Michael, would you tell us about that?
Michael Chui: Well, I think one of the things that we've often discovered when looking at data and analytics: Those of us who like data … You know, we look at averages and means in particular, and what we found is oftentimes the averages hide some of the most interesting insights. And so, being able to understand distributions has always been important when it comes to data; to use a marketing term, this idea of "segmentation." In fact, not every customer wants the same thing. Not every citizen wants the same thing. Not every citizen is going to benefit from the same sort of intervention, etc. And that’s one of the things we’ve known for many, many years.
But now we have the technological capability to not only look at three customer segments based on demographics, or ten behavioral segments, but to really help an individual based on what their needs are. You know, from a healthcare perspective: really understand, for example, their genetic makeup, and then be able to customize something for an individual, a “segment of one,” as the people in marketing say. I think that’s a capability which is now coming to the fore. One of the things that we know is that thinking about people as individuals is something we naturally do as human beings, but having our machines be able to do that as well means there is a lot of value to be created.
It does bring to mind, again, coming back to your question about ethics and values, how you ought to deal with the privacy question, because when you have enough information to be able to customize a service or a product for a person, that means you do have some pretty interesting information about that individual. And so, you have some questions about how you want to handle that. But as soon as you are able to understand and handle that, and provide that individual citizen, or customer, or employee with an understanding of why and how their data is being used, then we can start to provide, as you described, radical personalization. It is one of the things that we described in our report from December as being a potentially disruptive force, because many organizations really are set up to deal with groups, as opposed to individuals. And when a competing organization comes along and says, “Well, I can provide you with exactly what you need in a very customized way,” that can really change the game.
David Bray: And I think that is going to be a fascinating area for the public sector to try and wrestle with, especially in republics and representative democracies. Historically, the public sector has provided the same service to everybody, without any personalization, because we're trying to be equitable. And we don't want people to say, "Well, they got preferential treatment, or they got something special." But I think as consumers and citizens alike come to expect that they're going to get personalization from the private sector, they're going to look at the public sector and say, "Why aren't you treating me like an individual?" That is going to be a real thorny issue: how do we allow the public sector to personalize its services to you, but still have checks and balances to make sure nobody is getting preferential or biased treatment?
Or, it may very well be that people don't want to reveal the information necessary to give the personalization. And so, that's where actually I think for public service, and it may apply to other organizations as well, we almost need to take the Golden Rule of "Do unto others as you would have done unto you," and tweak it slightly, for what Michael was saying, to be, "Do unto others as they would permit, and maybe even would like, you to do unto them." And so, we may have to figure out how individuals in the public can express to a government what they permit to be done with their data, and what they would like to have done with it, and recognize that's going to have huge variability across nations and across the world.
Michael Krigsman: We have got a bunch of questions coming in from Twitter. And, we don’t have that much time left, but here’s an interesting one from Chad Barbier, and he asks: What applications are you finding that automation is working well for today? Anybody?
Michael Chui: I'm going to mention a couple of things. Some of the types of activities that we found have the greatest automation potential are physical activities in predictable environments. And so, a classic case of that is an assembly line. So, we're starting to see a lot of robotics being used in those types of situations. What's interesting is that as robotics decrease in cost, we're starting to see them used in services as well. For instance, at home, I have a robotic vacuum. Some people say that until we figure out the problem, we call it a robot; afterward, we call it a "dishwasher." And so, I think on the physical side, that's happening.
On the more cognitive side are two other types of activities: collecting data and analyzing data. And many times, I think people who are watching or listening will recognize this: how many times are there systems where I have to look something up in one system and type it into another, or cut and paste, or copy and paste, etc.? There is a set of technologies described as robotic process automation. They're not physical robots, they're software robots, but they automate some of these processes, which, as David says, are these boring things where I'm just taking something from this application, copying it, and pasting it into that one; all those really rote, simple, and super annoying things. We're seeing more and more organizations try to deploy those types of software robots to take away that really annoying work from human beings.
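The kind of “software robot” Michael describes can be sketched very simply. The system names, fields, and values below are invented for illustration and don’t come from any particular RPA product; the sketch just mimics the rote copy-from-one-system, paste-into-another work he mentions:

```python
import csv
import io

# Pretend this CSV export is "application A"...
export_from_app_a = io.StringIO(
    "invoice_id,customer,amount\n"
    "1001,Acme,250.00\n"
    "1002,Globex,99.50\n"
)

# ...and this dict stands in for "application B", the target system.
app_b_records = {}

# The "software robot": read each row out of system A and re-key it into
# system B, the same copy-and-paste work a person would otherwise do by hand.
for row in csv.DictReader(export_from_app_a):
    app_b_records[row["invoice_id"]] = {
        "customer": row["customer"],
        "amount": float(row["amount"]),
    }

print(app_b_records["1001"])  # → {'customer': 'Acme', 'amount': 250.0}
```

Real RPA tools drive actual application interfaces rather than in-memory objects, but the shape of the work, extract, transform slightly, re-enter, is the same.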
Michael Krigsman: And David, your thoughts on what is working well today, in terms of automation.
David Bray: So, there actually was a competition about two years ago on Kaggle to see if anyone could write an algorithm that could grade a paper like a third-grade teacher; so, find the same sentence mistakes and grammar mistakes. And for about sixty thousand dollars, someone actually wrote an algorithm that succeeded in doing that. And so, amplifying what Michael was saying: my interests, particularly because I'm in public service, are in those things we can do to remove the rote, repetitive work from individuals so they can focus on the unique problem-solving things they need to do. So, I think it is making sense of large amounts of data to find errors, to correct things, to give recommendations back, and then to tip and cue a human to pay attention. Those are the things that I think are working today.
I think the challenge is there are a lot of cases [where] those systems that can make sense of patterns, and can tip and cue humans, don’t have access to a sufficient amount of data on things that are useful to the public - whether it’s because we need to make sure we protect privacy, or because that data right now is not in a format that can actually be used by the machine, [etc.] I think we need to have better conversations about what are the top, maybe, three challenges we want to solve as a nation, and then identify what data, as well as algorithms, we can bring to bear. But the technology exists today to find interesting patterns, to find things that are missing, and to make corrections.
Michael Krigsman: We’ve got just about five minutes left, and Michael, would you share a distilled summary with us of your thoughts about where this is going in the near-term, and practical advice that you have for managers and business leaders who are looking at this changing landscape and feeling a little bit confused about what to do?
Michael Chui: A couple of quick things. You know, one is we talked about data a lot, and I think one of the things that we found, and my colleagues who are helping various organizations around the world find, is that there is usually value just sitting on the table; because in most cases, organizations have access to a lot of data, whether it’s data within organizations or external or open data. And, a very small percentage of the value gets captured from that data that is already sitting there. So, Number One: Figure out what you can do with the stuff that you already have access to.
And then, the second thing, which is actually the harder thing: because data analytics, AI, and machine learning can actually add value to almost any process, the hardest thing oftentimes for an organization is to figure out what it should do first. And that really just requires you to map out where you can do things, and then prioritize the things that create the most value and that you can capture most easily.
And finally, the last thing I’d say is you’ve got to solve the technical problems, but the hardest problems that we’ve talked about several times are how you move an organization. And, that requires just leadership, and so working on the leadership side to move an organization to use these technologies well is what’s important.
Michael Krigsman: And David Bray, your thoughts on how you move an organization, as Michael was just saying, to be able to take advantage of these technologies in the right way?
David Bray: So, sort of looping back to how, ten years ago, Michael and I were researching distributed problem-solving networks: I think you need to recognize that no one person is going to know all the data that is of value within the organization, and no one person is going to know the processes that best lend themselves to being adapted and improved. So, in some respects, you almost want to crowdsource it within your organization, and you want to champion anyone who can come to you with an interesting pitch on the inside that says, "Look, if we brought this data and this data together, we'd have these insights. And then, we could tackle this process." You almost treat it like being an internal venture capitalist.
That shifts the role of the CIO from being top-down, and supposedly having to know everything in a rapidly changing world, to being almost a human flak jacket and champion for anyone who can bring interesting data to bear that can inform how the organization can do better and improve its processes. And I think that’s required because this is changing so quickly, and at the end of the day, we are changing what people are doing. You are changing how they work, and they’re going to feel threatened if they haven’t bought into, “I’m okay with changing this process because I see the better outcome that will come as a result.”
And so, I think that’s almost an imperative for CIOs: to really work closely with their Chief Executive Officers and say, “What I will do is effectively serve as an internal venture capitalist on the inside, for how we bring data to bear on process improvements and organizational performance improvements - and work it across the entire organization as a whole.”
Michael Krigsman: Well clearly, these new technologies, data automation, AI, and machine learning, have the dual components of the technology itself and the organizational implications. And while that’s true of any technology that hits the enterprise, the potential implications seem even greater in this case.
You have been watching Episode #219 of CxOTalk. We’ve been speaking with David Bray, who is the CIO of the Federal Communications Commission, and Michael Chui, who is a partner at McKinsey with the McKinsey Global Institute. Gentlemen, thank you so much for taking the time today!
Michael Chui: Thank you, Michael!
David Bray: Thank you, Michael!
Michael Krigsman: It has been a great discussion and I invite everybody to come back next week because we'll be doing it again with another great show. Thanks for watching. Bye-bye!
Published Date: Feb 17, 2017
Author: Michael Krigsman
Episode ID: 418