Artificial Intelligence and Privacy Engineering

AI, machine learning, and predictive analytics rely on massive data sets. While holding the potential for great benefit to society, this explosion of data collection creates privacy and security risks for individuals. In this episode, one of the world's foremost privacy engineers explores the broad privacy implications of data and artificial intelligence.


Michelle Finneran Dennedy currently serves as VP and Chief Privacy Officer at Cisco. She is responsible for the development and implementation of the organization's data privacy policies and practices, working across business groups to drive data privacy excellence across the security continuum. Before joining Cisco, Michelle founded The iDennedy Project, a public service organization addressing privacy needs in sensitive populations, such as children and the elderly, and in emerging technology paradigms. Michelle is also a founder and editor-in-chief of a new media site—TheIdentityProject.com—that was started as an advocacy and education site, currently focused on the growing crime of child ID theft. She is the author of The Privacy Engineer's Manifesto.

Dr. David A. Bray began work in public service at age 15, later serving in the private sector before returning as IT Chief for the CDC's Bioterrorism Preparedness and Response Program during 9/11; volunteering to deploy to Afghanistan to "think differently" on military and humanitarian issues; and serving as a Senior Executive advocating for increased information interoperability, cybersecurity, and civil liberty protections. He completed a Ph.D. from Emory University's business school and two post-docs at MIT and Harvard. He serves as a Visiting Executive In-Residence at Harvard University, a member of the Council on Foreign Relations, and a Visiting Associate at the University of Oxford. He has received both the Arthur S. Flemming Award and the Roger W. Jones Award for Executive Leadership. In 2016, Business Insider named him one of the top "24 Americans Who Are Changing the World". He is currently the Chief Information Officer at the US Federal Communications Commission.

 

Transcript

Michael Krigsman: Welcome to Episode #229 of CxOTalk. As always, we have an amazing show. I’m Michael Krigsman. I am an industry analyst and the host of CxOTalk. Before we start, I want to say “thank you” to Livestream for just their great, great, great support of CxOTalk. And if you go to Livestream.com/CxOTalk, they will give you a discount.

So, today we are talking about artificial intelligence; AI; and privacy engineering. And, we have two amazing people. Let me start by introducing Michelle Dennedy, who is the Chief Privacy Officer of Cisco Systems. Hey, Michelle! How are you? This is your second time back on CxOTalk.

Michelle Dennedy: It is! You can’t get rid of me, Michael!

Michael Krigsman: Well, I consider that to be a good thing. So, Michelle, very quickly, tell us about Cisco. I think we know who Cisco is, but tell us about what you do…

Michelle Dennedy: I hope so!

Michael Krigsman: … at Cisco.

Michelle Dennedy: Umm, so Cisco… We are now no longer a startup! Sometimes, I have to remind myself of that because there’s so much innovation going on here, but we are the heart and soul of the network globally. We support … I think we’re up to 130 nations [that] rely on the technology that we serve. And most importantly, for tech today, I report to a fellow named John Stewart who is our Chief Trust Officer. And so, for us, trust, privacy engineering, data protection, security, security engineering, and advanced research all live in one place in operations. So, we are as much evangelists and forward-thinking innovators as we are operational staff really making this work for ourselves and our customers. So, it’s a fun place to be; kind of at the crux of Cisco’s network, and the gateway, really, to all of the networking that goes on.

Michael Krigsman: Wow! So, we’re going to have to definitely be talking about trust during this conversation. And, our second guest is somebody who regular viewers of CxOTalk are familiar with, because he’s been here a number of times; and that is David Bray. David is an Eisenhower Fellow, as well as the Chief Information Officer for the Federal Communications Commission. Hey, David! Welcome back to CxOTalk!

David Bray: Thanks for having me, Michael! And I guess you really can’t get rid of me since I keep on coming back. So thanks for …

Michelle Dennedy: [Laughter]

Michael Krigsman: And, again, lucky me!

So, I think, to begin, the title of this show is “AI and Privacy Engineering.” And, maybe we should begin by talking about what is privacy engineering? And then, let’s talk about what we mean by "AI." So, Michelle, what is privacy engineering?

Michelle Dennedy: Excellent. So, Privacy by Design is a policy concept that was first introduced at large… It had been hanging around for ten years in the networks, coming out of Ontario, Canada, with a woman named Ann Cavoukian, who was the Privacy Commissioner of Ontario at the time. But in 2010, we introduced the concept at the Data Commissioners’ Conference in Jerusalem, and it was adopted by over 120 different countries to say that privacy should be something that is contemplated in the build; in the design; and that means not just the technical tools you can buy and consume, [but] how you operationalize; how you run your business; how you organize around your business.

And, getting down to business on my side of the world, privacy engineering is really using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the really most basic sense of engineering to say, “What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to actually build in and solve for privacy challenges?”

And I'll double-click on the word "privacy." It does not mean having clean underpants, or simply using encryption. Privacy, in the functional sense, is the authorized processing of personally-identifiable data using fair, moral, legal, and ethical standards. So, we really break down each one of those things and say, "What are the functionalized tools that we can use to promote that whole panoply and complicated movement of personally-identifiable information across networks with all of these other factors built in?" It's not something that you're going to paste onto the end easily. You're certainly not going to disclaim it away with a little notice at the end saying, "Hey! By the way, I'm taking all your data! Cheerio!" Instead, you're really going to build it into each layer and fabric of the network, and that's a big part of why I came to Cisco a couple of years ago. [It's] if I can change the fabric down here, and our teams can actually build this in and make it as routinized and invisible as possible, then the rest of the world can work on the more nuanced layers that are also difficult and challenging.

Michael Krigsman: Okay. So, clearly, there's this key element of trust as you mentioned earlier. And David Bray, when we think about AI in this context of privacy, and of trust, where do they intersect? Where does privacy intersect with AI?

David Bray: So, I loved what Michelle said about this actually being something that's not just putting on encryption, which I think a lot of people think is a panacea, and it's not going to solve everything. It's worth going back to the roots of when the act came about in the United States. It came about when we started doing these things called […] data processing, when we were able to start correlating information, and the […] became that something could be made of these correlations, given your consent, too. And so, what Michelle said about building beyond and thinking about networks: that really gets to where we're at today, now in 2017, which is it's not just about individual machines making correlations; it's about different data feeds streaming in from different networks, where you might make a correlation that the individual has not given consent to with […] personally identifiable information.

And so, for AI, if you think about it, it really is just sort of the next layer of that. We've gone from individual machines, to networks, to now something that is looking for patterns with an unprecedented capability. At the end of the day, it still goes back to: what is coming from what the individual has given consent to? What is being handed off by those machines? What are those data streams?

One of the things I learned when I was in Australia as well as in Taiwan as an Eisenhower Fellow; it's a question about, "What can we do to separate the setting of our privacy permissions, and what we want to be done with our data, from where the data is actually stored?" Because right now, we have this more simplistic model of, "We co-locate on the same platform," and then maybe you get an end-user agreement that's thirty or forty pages long, and you don't read it. Either you accept or you don't accept; if you don't accept, you won't get the service, and there's no opportunity to actually say, "I'm willing to have it used in this context, but not these contexts." And I think that means AI is going to really raise questions about the context of when we actually need to start using these data streams.

Michael Krigsman: So, Michelle, thoughts on this notion of context? Where does that come into play?

Michelle Dennedy: For me, it’s everything. We wrote a book a couple years ago called “The Privacy Engineer’s Manifesto,” and in the manifesto, the techniques that we used are based on really foundational computer science. Before we called it “computer science” we used to call it “statistics and math.” But even thinking about geometric proof, nothing happens without context. And so, the thought that you have one tool that is appropriate for everything has simply never worked in engineering. You wouldn't build a bridge with just nails and not use hammers. You wouldn't think about putting something in the jungle that was built the same way as a structure that you would build in Arizona.

So, thinking about use-cases and contexts with human data, and creating human experiences, is everything. And it makes a lot of sense. If you think about how we're regulated primarily in the U.S., we'll leave the bankers off for a moment because they're different agencies, but the Federal Communications Commission, the Federal Trade Commission; so, we're thinking about commercial interests; we're thinking about communication. And why is communication wildly imperfect? Because it's humans doing all the communicating!

So, any time you talk about something that is as human and humane as processing information that impacts the lives and cultures and commerce of people, you're going to have to really over-rotate on context. That doesn't mean everyone gets a specialty thing, but nor does it mean that everyone gets a car in any color they want so long as it's black.

David Bray: And I want to amplify what Michelle is saying. One of the things, when I arrived at the FCC in late 2013: we were paying for people to volunteer what their broadband speeds were in certain, select areas because we wanted to see that they were getting the broadband speed that they were promised. And that cost the government money and it took a lot of work, and so we effectively wanted to roll out an app that could allow people to crowdsource, and, if they wanted to, see what their score was and share it voluntarily with the FCC. Recognizing that if I stood up and said, "Hi! I'm with the U.S. government! Would you like to have an app […] for your broadband connection?" Maybe not that successful.

But using the principles that you mentioned about privacy engineering and privacy by design, one, we made the app open source so people could look at the code. Two, we designed the code so that it didn't capture your IP address, and it didn't know who you were within a five-mile radius. So it gave some fuzziness to your actual, specific location, but it was still good enough for informing whether or not broadband speed was as desired.
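
A minimal sketch of the kind of location coarsening David describes might look like the following; the function names, the payload fields, and the rough degrees-per-mile conversion are illustrative assumptions, not the FCC app's actual code:

```python
import random

# Roughly 5 miles expressed in degrees of latitude (~69 miles per degree).
# Longitude degrees shrink toward the poles, so this is only approximate,
# which is fine: the goal is fuzziness, not precision.
FUZZ_DEGREES = 5.0 / 69.0

def fuzz_location(lat: float, lon: float) -> tuple[float, float]:
    """Offset the true coordinates by a random amount within ~5 miles, so a
    report is useful for coverage maps but does not pinpoint a household."""
    return (lat + random.uniform(-FUZZ_DEGREES, FUZZ_DEGREES),
            lon + random.uniform(-FUZZ_DEGREES, FUZZ_DEGREES))

def build_report(lat: float, lon: float, download_mbps: float, upload_mbps: float) -> dict:
    """Assemble the payload to share. Note there is no IP address, device ID,
    or account field at all: the privacy property comes from never collecting
    the identifier, not from encrypting it after the fact."""
    fuzzy_lat, fuzzy_lon = fuzz_location(lat, lon)
    return {"lat": round(fuzzy_lat, 3), "lon": round(fuzzy_lon, 3),
            "download_mbps": download_mbps, "upload_mbps": upload_mbps}

if __name__ == "__main__":
    print(build_report(38.8977, -77.0365, download_mbps=42.1, upload_mbps=9.7))
```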

And once we did that; also, our terms and conditions were only two pages long; which, again, we sort of dropped the gauntlet and said, “When was the last time you agreed to anything on the internet that was only two pages long?” Rolling that out, as a result, ended up being the fourth most-downloaded app behind Google Chrome because there were people that took a look at the code and said, “Yea, verily, they have privacy by design.”

And so, I think that this principle of privacy by design is making the recognition that, one, it's not just encryption, but then, two, it's not just the legalese. It really is: if you can show something that gives people trust; that what you're doing with their data is explicitly what they have given consent to, what they have chosen to permit with their data ... That, to me, is what's really needed for AI, [which] is, can we do that same thing, which actually shows you what's being done with your data, and gives you an opportunity to weigh in on whether or not you want it?

Michael Krigsman: Can I ask either of you for a thought here? AI is really, at its heart, pattern matching. And, pattern matching is certainly not new, although it seems that we have built a cult around AI as if it were something new.

Michelle Dennedy: [Laughter] But it’s magic.

David Bray: Sure! It is what cloud was five years ago. [Laughter]

Michelle Dennedy: Yeah. [Laughter]

Michael Krigsman: So, since pattern matching is not new, and therefore, the foundations of AI are not new, and privacy is certainly not new, why should we even care about this topic in such an acute way? And also, the ethical implications. Why should we care about this?

Michelle Dennedy: So, if you want to start, David? I think we both have a lot to say, here. [Laughter]

David Bray: I will defer to you. Do you want me to? It’s up to you.

Michelle Dennedy: Okay. So I will dig in. First of all, I have to just underline anyone watching the playback, go back and watch Dr. Bray talking about “Yea, verily.” Okay? How many people? That’s like Robin Hood-speak in the government! So, yea, verily we see! So, I had to put a little purple underline about that.

But as far as why do we care, and why do we care now? I've heard this throughout my entire career in privacy, for two decades: either it hasn't existed, or it's so freaking hard that only a genius could get it. It's kind of like, which null set is it?

First of all, I think we should take a step back on artificial intelligence. You can tell that the way my mind works is kind of matrixed. What are we talking about? When we say "artificial intelligence," are we talking about Skynet, which is like super-secret magic, or are we talking about a really huge amount of dumb machines that are gathering stuff from either observations, sensors, or the inputs of humans? And so, the quality degrades over time.

And then, you’re coming up with analytics. Are we back to statistics saying what is the trend and that can be artificial intelligence? If you think about weather mapping and how we decide which planes get to take off when? There’s a lot of artificial intelligence and analytics that come from sensors that talk about what’s the moisture in the air, wind pressure, what the weight on the plane is, how old is the plane; blah, blah, blah, blah, blah. All that data coming together so that someone can make a decision based on observations that we have not directly made in the moment, that can be a type of artificial intelligence.

And, it has impacted some lives, you know? Whether that plane makes it is a really important thing to me, particularly if I'm sitting on it or if someone I like is sitting on it. I may have a list of others, but that's an ethical concern.

David Bray: [Laughter]

Michelle Dennedy: I think the other thing about why do we care, and why do we care now, is that the first 30-35 years of compute has been, "Can we make it work?" And I think the next 30-50 years is, "Can we make it work for us?" Are we the victims of the only platform available being X, and therefore we're using that and calling that trust? Or, do we now legitimately have a choice of the quality, the kind, the testing, of data and how it's processed and when and where it's processed?

And I think that we are on the cusp of saying "yes" to that answer. There's enough information and capability in the networks that we can get broadband out so that information can get to schools in rural areas, or teachers can learn; there's an amazing program in Chengdu that Cisco actually helped with after a terrible earthquake, and now we have the best teachers available. They broadcast their lesson plans the night before, and they reach over a million children in the outlying provinces of China. That's the power of the network.

And artificial intelligence can be a very big component of that. And obviously, there are other issues that we’re going to get into of, you know, there are concerns here. And we have to think about both the quantity and the quality of those concerns.

Michael Krigsman: David, we have an interesting question from Twitter, continuing this thread. Scott Weitzman is asking, "With AI, is there a need for a new level of information security? And, should AI itself be part of this security?"

David Bray: So, I’ll give the simple answer which is “Yes and yes.” And now I’ll go beyond that.

So, shifting back first to what Michelle said, I think it is great to actually unpack that AI is many different things. It's not a monolithic thing, and it's worth deciding: are we talking about simply machine learning at speed? Are we talking about neural networks? But that said, even if we don't spend the time unpacking all of that ... I think why this matters now is that five years ago, ten years ago, fifteen years ago, the sheer amount of data that was available to you through the Internet of Everything; devices that are now streaming to the internet; was nowhere near what it is right now, let alone what it will be in five years.

I mean, if we're right now at about 20 billion networked devices on the face of the planet relative to 7.3 billion human beings, estimates are at between 75 and 300 billion devices in less than five years. And so, I think we're beginning to have these heightened concerns about ethics and the security of data. To Scott's question: it's simply because we are instrumenting ourselves, we are instrumenting our cars, our bodies, our homes, and this raises huge amounts of questions about what the machines might make of this data stream. It's also just the sheer processing capability. I mean, the ability to do petaflops and now exaflops and beyond; that was just not present ten years ago.

So, with that said, the question of security. I would modify Scott's question slightly and say it's both security, but also we need maybe a new word. I heard in Scandinavia they talk about integrity and being integral. It's really about the integrity of that data: have you given consent to having it be used for that purpose? So I think yes, AI could definitely play a role, not just in making sense of whether this data is being securely processed; because the whole challenge right now is that, for most of the processing, we have to decrypt it at some point to start to make sense of it and re-encrypt it again; but also, is it being treated with integrity, integral to the individual? Has the individual given consent?

And so, one of the things that I've heard; it was actually raised when I was in conversations in Taiwan; I want to raise the question of, "Well, couldn't we simply have an open-source AI, where we give our permission and our consent to the AI to have our data be used for certain purposes?" For example, it might say, "Okay, well I understand you have a dataset with this platform, this other platform over here, and this platform over here. Are you willing to actually have that data be brought together to improve your housekeeping?" And you might say "no." It says, "Okay. But would you be willing to do it if your heart rate drops below a certain level and you're in a car accident?" And you might say "yes."

And so, the only way I think we could ever possibly do context is not going down a series of checklists and trying to check all possible scenarios. It really is going to have to be a machine that is actually able to talk to us and have conversations about what we do and do not want to have done with our data.
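
The consent broker David sketches here can be pictured as a per-person policy that is consulted, purpose by purpose, before data streams are ever joined. The class, purpose, and source names below are hypothetical; this is a minimal sketch of the idea rather than any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    """One person's record of which data sources may be combined for which
    purposes. Purpose and source names are illustrative placeholders."""
    allowed: dict[str, set[str]] = field(default_factory=dict)  # purpose -> permitted sources

    def grant(self, purpose: str, sources: set[str]) -> None:
        """Record consent for combining these sources for this purpose."""
        self.allowed.setdefault(purpose, set()).update(sources)

    def permits(self, purpose: str, sources: set[str]) -> bool:
        """True only if every requested source was consented to for this purpose."""
        return sources <= self.allowed.get(purpose, set())


policy = ConsentPolicy()
# The user declines routine cross-platform profiling but allows the same
# streams to be joined in a medical emergency, as in David's example.
policy.grant("medical_emergency", {"heart_rate_monitor", "car_telemetry", "location"})

print(policy.permits("housekeeping", {"heart_rate_monitor", "location"}))            # False
print(policy.permits("medical_emergency", {"heart_rate_monitor", "car_telemetry"}))  # True
```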

Michael Krigsman: So, the issue, then, is this combination of data plus compute power. And you add those together with, can we say, advanced new pattern-matching capabilities and techniques, and that's why we have this new set of privacy challenges? Is that a fair statement?

David Bray: Very much.

Michelle Dennedy: […]

David Bray: I would say IoT plus interconnectivity, plus machine processing; this is the storm ahead. Go ahead, Michelle. Sorry.

Michelle Dennedy: Yeah. Now that I know, I’ll just add one more element, which is the carbon-based unit.

David Bray: Yes.

Michelle Dennedy: Just so you know. [Laughter] We’re seeing cultures come together thanks to the network like never before. And sometimes, that’s wonderful. My daughter has a best friend in China, and someone else that she’s never met in Amsterdam, and that’s incredible and supportive and wonderful, and we all learn great things.

However, we are also exposed on a daily basis to the trauma of the world. Never before have we been able to witness mass problems, you know? When you turn on BBC World … I didn't grow up with BBC World. There were only two networks that were available to me. And now, we are bombarded with information, for better or worse […]. And we're making decisions, and we're seeing the world kind of recycle old ideas, and hopefully, process them in other ways.

So, when we're making automated decisions, it's absolutely critical that we understand that we are documenting, in some way, what those decisions are and in what context. So it's like, context on top of context, on top of context. And we understand that sometimes, as David was saying, there are certain periods where it's like, "Do I want everything on this young, healthy person? Do I want every bit of my health and aspects of my health monitored? Maybe not." Is that a decision we're going to make in loco parentis? I threw a little Latin in there for you, David!

David Bray: [Laughter]

Michelle Dennedy: Yup. I’m here for you!

Are we going to make that decision, as a society, and say, "Listen. If only I had put sunscreen on my water stream when I was younger, I wouldn't be, like, holding my face up with bandages at this point in my middle age!" Or, are we simply going to let people choose and educate them enough to make good choices? We don't have the answers yet, and that's why I think it's interesting and exciting and innovative to try to build out controls and ethical tools as we're building this brave new world.

David Bray: And I want to add to what Michelle said about the importance of people, because just as we know that human beings do great things, mundane things like cat videos, as well as not-so-great things too; so, too, will the machines. Machines […] are an amplification of what we share and send to them. And without naming the specific app, there was a story this week in which an app was bringing in people's photos and faces, and it allowed you to make them "animé'd." The challenge is, when you click the beautification button, unfortunately, the app's conclusion was that beauty was lighter skin, which, if any of us had ever […] that's atrocious towards…

Michelle Dennedy: Yeah.

David Bray: ... the human … That's what the machine had been taught to think is beautiful. So, we need to recognize there is both the importance of privacy and engineering privacy by design, but also some sort of check to make sure that the machine is not going down a really bad path that is incorrect socially, racially, whatever it might be. And we need to be aware of that.

Plus, just think about it. I mean, ten years ago, most of us did not have lots of our health data online. And so, we were not targets for having that data stolen. But if you look at recent cyber-trends, where the really interesting attacks are going is actually after healthcare data, because that has huge value, unfortunately, on the dark web. And so, there are also going to be questions of: even if you do share it and you give permission, are we now creating a new target or a new attack surface that we have not thought about as a society?

Michael Krigsman: Michelle, I have a question for you, picking up on David's comment just now; this issue of bias. So, with machine learning, we give the system datasets, and if those datasets have some inherent bias, then the AI system will pick up that same bias. And so, are there privacy engineering considerations that come into play with respect to the potential for inherent bias in a given system?

Michelle Dennedy: Yeah. And this is something I've been really thinking hard about lately and talking about with much smarter people than myself, which isn't hard, sadly. But, there's a woman; gosh, I'm going to forget her name now; Elish; she wrote a paper called "Moral Crumple Zones," and I just love even the visual of it. If you think about cars and what we know about humans driving cars, they smash into each other in certain known ways. And the way that we've gotten better and lowered fatalities in known car crashes is we actually use physics and geometry to design a cavity in various parts of the car, where there's nothing that's going to explode or catch fire, etc., as an impact crumple zone. So all the force and the energy goes away from the passenger and into the physical crumple zone of the car.

Now, the analogy falls apart fairly readily. So, don't throw your Twitter knives at me. But, I think Madeleine Clair Elish is her name – I'm sorry, Madeleine, I think you're brilliant. She's defending her Ph.D. this month, so let's hold our hands out for Madeleine – but she's really working on exactly what we're talking about. We don't know when it's unconscious or unintentional bias because it's unconscious or unintentional bias. But, what we can do is design in ethical crumple zones, where we're doing things like testing before feeding, just like we do with sandboxing or with dummy data before we go live in other types of IT systems. We can decide to use AI technology and add in known issues for retraining that database.

I’ll give you Watson as an example. And Watson isn't a thing. Watson is a brand, right? So the way that the Watson computer beat the Jeopardy contestants was by learning Wikipedia: by processing mass quantities of stated data, you know, with whatever level of authenticity that carries. And it could really simulate a genius person.

What Watson cannot do is selectively forget. Your brain and your neural network are actually better at forgetting data and ignoring data than they are at processing data. So, we're trying to make our computer simulate a brain, except that brains are actually good at forgetting. AI is not good at that yet. So, you can take the tax code, which would fill three ballrooms if you printed it out on paper, feed it into an AI-type dataset, and train it on the known amounts of money someone should pay in a given context.

What you can't do, and what I think would be fascinating if we did do, is if we could possibly wrangle the data of all the cheaters. What are the most common cheats? How do we cheat? And we know the ones that get caught, but more importantly, how do […] get caught? That's the stuff where I think you need to design in a moral and ethical crumple zone and say, "How do people actively use systems?" The concept of the ghost in the machine: how do machines that are well-trained with data experience degradation over time? Either they're not pulling from datasets because the equipment is simply … You know, they're not reading tape drives anymore, or they're not being fed fresh data, or we're not deleting old data. There are a lot of different techniques here that have yet to be deployed at scale, and that I think we really need to consider before we're overly relying [on AI] without human checks and balances, and process checks and balances.

Michael Krigsman: So, let me ask David. What can we actually do about this? Because it's one thing, and relatively easy, to talk about it. But, what are the implications for policy? For the private sector? For the people that are building these systems? For the corporations that are using AI tools and collecting these large datasets? And for data scientists? What do we need to do?

David Bray: So, I think it’s going to have to be a staged approach, because exactly as Michelle said; I’m going to throw out also some Greek: “experiments” and “expertise” both come from the Greek word “experior,” meaning “out of danger.” We are in dangerous times, and we’ve got to do some experiments to figure out the expertise to move forward. I would recommend as a starting point: begin to have … You almost need to have the equivalent of a human ombudsman – a series of people looking at what the machine is doing relative to the data that was fed in.

And you can do this in multiple contexts. Either, A, it could just be internal to the company, and it’s just making sure that what the machine is being fed is not leading it to decisions that are atrocious or erroneous. Or, if you really want to gain public trust, share some of the data and share some of the outcomes, but abstract anything that’s associated with any one individual and just say, “These types of people applied for loans. These types of loans were awarded,” so we can make sure that the machine is not hinging on some bias that we don’t know about.
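
A minimal sketch of the kind of aggregate check such an ombudsman could run over de-identified outcomes, in the spirit of the loan example above, might look like this; the group labels, the 0.8 ratio (a common "four-fifths"-style screening convention), and the function names are assumptions for illustration:

```python
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Aggregate already de-identified decisions and return the approval
    rate per applicant group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(rates: dict[str, float], ratio: float = 0.8) -> list[str]:
    """Flag groups whose approval rate falls below `ratio` times the
    best-treated group's rate: a crude screen for a human or machine
    ombudsman to review, not a verdict of bias on its own."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < ratio]

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = approval_rates(decisions)
print(rates)                    # approval rates: A ~0.67, B ~0.33
print(flag_disparities(rates))  # ['B'] -- flagged for human review
```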

Longer-term, though, you’ve got to actually write that ombudsman. So real quick, to what Michelle said: we need to be able to engineer an AI to serve as an ombudsman for the AI itself. So really, what I’d see is not AI as just one monolithic system; it may be one that’s making the decisions, and then another that’s serving as the Jiminy Cricket that says, “This doesn’t make sense. These people are cheating,” and it’s pointing out those flaws in the system as well. So we need the equivalent of a Jiminy Cricket for AI.

Michael Krigsman: Okay. There’s …

Michelle Dennedy: We have range! We’ve had Disney, you know, we’ve got ancient Greek, I like it!

David Bray: You need to bring in Pirates of the Caribbean and we’re set!

Michelle Dennedy: [Laughter] Little extra eyeliner for me. Make me Johnny Depp.

David Bray: Excellent!

Michael Krigsman: Okay. So there we have that word again; that word, “trust.” Why does trust keep coming up as a thread throughout this conversation?

Michelle Dennedy: So, I’m going to go all Brené Brown on you. [Laughter]

David Bray: Go for it!

Michelle Dennedy: And if you don’t know who she is, you really should! She’s so awesome! She’s out there rising strong and all sorts of good stuff.

I think at the root of trust, and why we keep rotating back to that word, even in computer science, where typically, a lot of people gravitated to this field because they’re either afraid of humans or just prefer machines. Not that we don’t love them all! I have Sheldon here just, you know … We love you!

David Bray: [Laughter]

Michael Krigsman: [Laughter]

Michelle Dennedy: Umm … [Laughter] But, I think it’s really important to … How do I put it? Just, building this stuff in is essential, and having it built in by the people who are attracted to a field that may not be human; it’s so interesting once you introduce the notion of trust.

I’d break down trust into two factions. One is, I’m on a mountain, and I’m hanging onto a blade of grass, and you’re the dude with the rope. I trust you with my life. Why? Because I trust you more than my ability to fly. You’re kind of the only game in town. Then, there’s the Brené Brown kind of trust, which is I think where we’re heading with these very complex compute systems, and the ones that feel simple because they are garage apps, and they’re easy to download and consume and throw away. But, they’re really adding to the complexity of the information footprint that surrounds you in the network; and that is trust that comes over time. Trust that says, “For better or for worse.” Like, there are certain platforms; why people buy them when they’re 1.0, I cannot comprehend. They will blue-screen you every time. Wait until 2.0, people! We know this! We’ve got 30 years of experience! My trust is, I trust that you’re going to release it too soon. And I trust that some sucker’s going to test it for me, and I’m going to get to it when it’s 2.0. And that happens with a lot of technology, including cars and all sorts of things. So, we’re buying a boat, right? Never buy a new boat.

Umm, the other kind of trust is the one we’re really trying for. You know who I am; you know I'm going to perform for you over time; you know that when I make a mistake, I'm going to admit it and correct it to the best of my ability. And, I'm going to have something or someone that is either a direct advocate for you, like the Chief Privacy Officer, or a proxy, which is going to be working with other people that come in and test our systems. You know, we've got consultancies all the time that test our ethical and our trust frameworks. Are they really too self-centered? Are they only looking at our shareholders? Or, are they also looking at the quality of care that we’re giving to all of our customers and employees?

So those are the two kinds of trust, and I think they can be broken down once again; I’m like a broken record: What does it mean? What is the problem we’re trying to solve? How do we break it down into things that we know we know how to do? And let’s look really hard at the things we don't know how to solve for, and either retrain for it, have teams around it, or have something else as a proxy when you can't solve it completely and perfectly.

Michael Krigsman: David Bray, let me ask you something here relating to this. Yesterday, I was in New York. I was running an event, and one of the participants on stage was a senior person from IT who works at JetBlue. And, you talk about trust, he was explaining how, in his world, there is no room for technology error, and therefore the trust level just needs to be right up at the top. How does this dimension of trust come into play?

David Bray: So, you set me up perfectly, because I wanted to amplify what Michelle said. When I was doing my Ph.D., I was focusing on how to improve organizational response to disruptive events, both the technology and the humans, because I like both of them and the interesting messiness that occurs when you get technology and humans together, because there are all sorts of pathologies that arise. Trust was key, and I would define trust, based on the academic literature, as the willingness to be vulnerable to someone or something you can't control. And so, the JetBlue example is, yes, when you're on a plane, when that wing flies off or the autopilot starts soaring into the ground; […] you’ve lost trust because you’re now vulnerable to the actions of something that you did not have direct control over.

Now, there are three antecedents that, if present, it’s been shown, make humans willing to trust a person or a thing: if they believe that the individual, or the machine, or the system is benevolent, so it has the person’s good interests in mind; if it’s competent, so it actually is skilled at what it does; and finally, that it acts with integrity. It’s not going to do, when you’re not looking, something that you’re not expecting, and only be on its best behavior when you’re looking.

If those three things are present, then you’re willing to be vulnerable, then you’re willing to trust. So, when I think about AI: how can we begin to [instrumentalize] and show benevolence? How do we reveal that to the public so the public buys into it? Competence is a little bit easier. Integrity is the hard one. It circles back to that conversation where, again, there are experiments in Europe, in Scandinavia in particular, around this idea of how you show integrity. Because let's think about it. The professions, going back to the 19th Century, doctors and lawyers; the reason the professions are able to credential and self-police themselves is because they do actually find the people that are behaving badly, the un-credentialed, […] and take their license away, or you get disbarred, or something like that.

What is the equivalent of the professions for AI, where, if the community determines that an AI is not behaving with integrity, we’re going to take your license away or we’re going to disbar you? Because I think the public is willing to let the private sector police itself insofar as it sees the self-policing as effective. Otherwise, they’re going to be looking for other options.

Michael Krigsman: I love this! And, so I have to ask the brilliant privacy engineer among us.

Michelle Dennedy: Uh-oh!

Michael Krigsman: Michelle, how do we manage this integrity issue? I figured we'd pose a difficult question to her. [Laughter]

Michelle Dennedy: [Laughter] Yeah! Well, I love that the breakdown is exactly, you know, on point for how we’ve organized around this at Cisco with our Chief Trust Officer, and putting us … You know, we have arms and legs in public policy, in the world of legal support, etc. But this is not a compliance function. This is an integrity-building function. Do we have the right people? Are they trained? Are they constantly, actively listening to vulnerable clients? And those clients are not just money-making machines who need networking or clouds or collab tools. These are people who are serving and creating experiences. These are people who have families, you know? We have 70 thousand Cisco employees, and they have families. And I take every single day that I work here as a fiduciary obligation to make sure that these families are … that people are going home to their families with integrity.

And I think part of that is really fascinating in the study of the ethics of … When you think about ethics versus morality; and this, you know, to anybody who’s really a scientist, I apologize for getting your terms wrong; I think of morality as "killing people, bad." That's a pretty universal one, I think! I hope! Killing people, bad! But, if you get into "What do I own as far as data?" this is where my ancient roots as a [patent] litigator come out and say, "I think about personally-identifiable information in many ways as similar to other versions of intellectual property.” It’s a story that exists in someone’s mind, in someone’s database, in someone’s diary somewhere; together, the three of us jointly own the truth that we were here today and had this conversation. Everyone who’s following us on Facebook Live or Twitter is part of this conversation.

Each one of those little breadcrumbs is an element of personally-identifiable information. Who owns that? Well, if you’re in the Western world, there are notions of intellectual property. If you’re in the U.S., we’re having kind of a change, a sea change, in what we mean by being… Are we the ethicist and the moral beacon of the world, or are we looking only within our territorial borders anymore? If you are in Asia and certain other collective communities, for three thousand years, they have looked at intellectual property as selfishness. We look at it as stealing: “Why are you takin’ my stuff?” They look at us like, “Why would you prevent everyone from benefitting from innovation?”

In a society that’s split, where you’re still trying to prove integrity, what you have to do is be very open about what your ethical model is. And if the model is that you get to control the informational stories that are told about you to the greatest extent possible, without degrading the ability of everyone else to live their lives with integrity and self-determination, and for the organization to continue to be around to protect that data, that is once again … this mixed, complex, use-case-driven model of integrity. But I think part of it is, even when we don’t know what the answers are, admitting that, and saying that we have organized, we have invested, and we are working openly with external parties and long-standing customers to continue asking questions about what feels like integrity … and talking about feelings at large.

You know, [at] Cisco, we build that network, remember? We, right now, are the electronic currency, if you will. The current that runs underneath all of human activity these days; it’s a huge responsibility. And as we’re getting smarter in the networks and the demand is for us to curate more and more of that information along its way, we’re also going to be the curators of the world’s currency of data. And so, the notion of integrity, in how you build in every single step of the way, from the policy to the buildout, to the quality models, to the organizational structures that deliver that; all of that matters. And all of that has to be orchestrated together in one, kind of beautiful package […].

Michael Krigsman: We have literally […] four minutes left, and it feels like this has gone by in a snap. So, David Bray, you have this broad overview of AI and technology, and I've even heard you quote from the Federalist Papers. So, what should we do?

David Bray: Well, again, we’re going to have to experiment. I don’t think any one person has all the answers. I definitely don’t pretend to. That means we need to be listening to all sectors and all members of the public because, as Michelle said, there are huge variances in perspectives both in the United States and around the world. I mean, we didn’t even dive into Europe and what’s going to be going on with GDPR, and there are going to be huge questions about how you can even begin to show what an AI is doing with its decision-making without showing the data, too.

But I will leave with two main thoughts. First, as you mentioned the Federalist Papers, I love to say, "What is government but a great reflection of humanity? If all men were angels, no government would be necessary." Let's just replace the word "government" with "civil society" and "public service," and let's add, instead of "men and women," let's do "men, women, and AI." What is AI but the greatest reflection of humanity? Because it is! It's us being reflected back, and that may or may not be a way to say, "Hold up. Wait. Is this fair? Is this right? Is this biased, or is this prejudiced?" So, I think AI can begin to be used as a tool to say, "Are you aware of these biases? Are you aware of these concerns that you may not have been aware of otherwise?" There's some good there.

And then two, to sort of take what Michelle said about ... I mean, the world has massive differences in philosophies, and I've tried to figure out … In three thousand years, philosophers still can't agree. But what I would say, as one undercurrent I see, is, "Do unto others as they give consent and are willing to have done unto them." That, to me, at the end of the day … So, we have to develop tools that allow people to express their consent, express their permissions, and then have it so it’s not always asking you, because we can’t answer these questions every five minutes. But, have it be sort of our own, personal, open-source AI bot that is representing what we give consent to, and what we give permission to, in the world ahead.

Michael Krigsman: And, it looks like, Michelle Dennedy, you’re going to get the last word here.

Michelle Dennedy: Oh, dear!

Michael Krigsman: In about one minute, David said, “Men, women, and AI.” So, please, tell us about that world.

Michelle Dennedy: [Laughter]

Michael Krigsman: And you only have a minute, by the way.

Michelle Dennedy: I only have a minute! Well, so I will end on a high note, hopefully. I will add, and I think they were implied in his list; children, and this new generation coming up. I am the mother of an eleven-year-old, and a fifteen-year-old who think they know everything about the world already! We have our judgment, and we have what we've learned from the last three thousand years of history to impart to our little ones. They are growing up in a very different environment. They are growing up with far more technical might than we ever did before. And, I think if there's one thing I would say to the world, it's "Don't count them out." If you think your kids don't want to curate and decide when they are a certain persona, you're wrong. If you think that your kids aren't interested in how their information is processed and how it's used for and against them, you're wrong! If you think that they aren't out there marching with their crazy, silly signs and doing all the things that we're doing to really express a new type of democracy, you’re out of your mind!

I’m looking to the younger generations to really take a rethink and a reset of what the specs and requirements for these hard-coded systems are, and how we allow for flexibility over time and maturity. And as we discard some of these tools that we thought were so nifty when they first came out, some of them will be like the fat-jiggling machines of the 1920s. They’ll just look silly to us.

Michael Krigsman: Okay….

David Bray: […] up a character from a 1999 Super Bowl ad. I think there are going to be some that are like that.

Michelle Dennedy: Exactly! Exactly. And we’ve got Grumpy Cat.

David Bray: Yes, exactly! [Laughter]

Michelle Dennedy: Like a selfie.

Michael Krigsman: I love that! And, are those the only real-life emojis that you have, Michelle Dennedy?

Michelle Dennedy: No! I've got … This is the Order of the Flying Pig, and I would like to present it to Dr. David Bray. The Flying Pig came about years ago, when someone told me that I only needed to do privacy for 10% of my time to support my customers, because it was going away. And I said, "Well, when pigs fly, we will do privacy all the time." So, I present to you, virtually, the Order of the Flying Pig, David, because we're not done doing privacy, ethics, or AI.

David Bray: Very honored to be inducted. Thank you!

Michael Krigsman: And, with that, it is time to conclude, sadly, Episode #229 of CxOTalk.  And, what an amazing conversation! We’ve been speaking with Dr. David Bray, who is an Eisenhower Fellow, and the Chief Information Officer of the Federal Communications Commission; as well as Michelle Dennedy, who is the Chief Privacy Officer of Cisco. And, I will extend an invitation to both of you to do this live. We need to have this conversation live on stage someplace, in front of an audience.

David Bray: With the Flying Pig. As long as we bring the Flying Pig, too, I’m all for it. Thank you, Michael.

Michelle Dennedy: She’s in! [Laughter]

David Bray: [Laughter]

Michael Krigsman: [Laughter] Everybody, thank you so much for watching. Check CxOTalk.com/episodes to see what’s coming next. And, like us on Facebook and of course, subscribe to us on YouTube. Thanks a lot. Take care, everybody. Bye-bye!

Published Date: Apr 28, 2017

Author: Michael Krigsman

Episode ID: 429