CIA privacy expert Jacqueline Acker shares practical insights on implementing ethical AI governance and building effective compliance programs in complex organizations.
AI Governance and Ethics: Personal Perspectives from a CIA Privacy Professional
On CXOTalk episode 863, join a fascinating conversation with Jacqueline Acker, Esq., a privacy professional from the Central Intelligence Agency, as we explore the critical intersection of AI governance, ethics, and national security. This episode offers a unique, personal perspective on how the CIA approaches AI ethics and governance while balancing operational security with transparency.
Drawing from her experience at the CIA, Acker discusses the practical challenges of implementing ethical AI frameworks in high-stakes environments. She shares valuable perspectives on developing robust governance processes, establishing accountability structures, and creating effective compliance programs for AI systems.
Key topics include:
- Building ethical AI programs that balance innovation with responsibility
- Practical approaches to AI governance and risk management
- The role of leadership in fostering ethical AI culture
- Similarities and differences between government and private sector AI ethics
- Best practices for AI ethics auditing and compliance
This episode provides business and technology leaders with actionable insights from the intelligence community's approach to AI ethics, offering valuable lessons for organizations developing their own ethical AI frameworks.
[Here is a link to the National Security Commission on Artificial Intelligence (NSCAI) report mentioned during the discussion.]
Episode Highlights
Build Trust Through Responsible AI Governance
- Create clear governance frameworks with documented roles, responsibilities, review cycles, and decision-making protocols to demonstrate responsible AI use. Establish regular assessment schedules based on each AI system's risk level and rate of change.
- Develop transparent documentation showing how AI systems are developed, what rules they follow, and how they protect user privacy and rights. This builds confidence with stakeholders and enables trust-based partnerships.
Establish Cross-Functional AI Ethics Programs
- Secure executive sponsorship and assemble interdisciplinary teams, including legal, technical, policy, and domain experts, to develop effective AI governance initiatives. Having data scientists work alongside policymakers ensures frameworks are both principled and practical.
- Before deploying AI systems, create thorough impact assessments covering privacy, security, bias, and other risks. Regular reassessment is crucial as capabilities evolve and new risks emerge.
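The impact-assessment guidance above lends itself to a simple, structured record. Below is a minimal illustrative sketch in Python, not a method described in the episode; the risk categories, field names, and reviewer roles are assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical risk categories for an AI impact assessment; a real program
# would tailor these to its own legal and policy requirements.
RISK_CATEGORIES = ["privacy", "security", "bias", "civil_liberties"]

@dataclass
class AIImpactAssessment:
    system_name: str
    purpose: str                       # what the system is intended to do
    data_sources: list[str]            # where its data comes from
    risks: dict[str, str] = field(default_factory=dict)        # category -> identified risk
    mitigations: dict[str, str] = field(default_factory=dict)  # category -> documented mitigation
    reviewers: list[str] = field(default_factory=list)         # e.g., legal, data science, privacy
    last_reviewed: date | None = None

    def unaddressed_risks(self) -> list[str]:
        """Risk categories with an identified risk but no documented mitigation."""
        return [c for c in RISK_CATEGORIES
                if c in self.risks and c not in self.mitigations]

# Example usage: flag gaps before deployment.
assessment = AIImpactAssessment(
    system_name="document-triage-model",
    purpose="Prioritize incoming documents for analyst review",
    data_sources=["internal document archive"],
    risks={"bias": "Over-prioritizes documents from a few source types"},
    reviewers=["legal", "data science", "privacy"],
)
print(assessment.unaddressed_risks())  # ['bias'] until a mitigation is recorded
```

Keeping the assessment as a structured record rather than free-form text makes the reassessment step easier to automate as capabilities evolve.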
Learn From Government Experience
- Study mature federal frameworks around data privacy, impact assessments, and transparency reporting that have evolved over 50+ years. Government experience offers valuable lessons for private-sector AI governance.
- Adapt proven public sector documentation standards and oversight mechanisms while accounting for your organization's specific context and requirements. Look especially at Privacy Act compliance models.
Implement Dynamic Review Processes
- Move beyond traditional "set and forget" technology governance to establish flexible frameworks that adapt to rapidly changing AI capabilities. High-risk systems need more frequent assessment than conventional technologies.
- Monitor emerging regulations, technical advances, and societal expectations to ensure governance keeps pace with change. Build adaptable processes that can evolve as new use cases emerge.
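As a hedged illustration of the risk-based review cadence described in the bullets above, here is a small Python sketch; the tier names, intervals, and the idea of halving the interval for rapidly changing systems are assumptions for the example, not a prescribed schedule.

```python
from datetime import date, timedelta

# Assumed review intervals by risk tier (in days); real cadences would be set by policy.
BASE_INTERVAL = {"high": 30, "medium": 90, "low": 365}

def next_review(last_review: date, risk_tier: str, changes_frequently: bool) -> date:
    """Pick the next assessment date from risk tier and rate of change.

    Systems that change frequently (e.g., models retrained on new data) get
    half the base interval, reflecting the idea that high-risk or fast-moving
    systems need more frequent review than conventional technologies.
    """
    interval = BASE_INTERVAL[risk_tier]
    if changes_frequently:
        interval //= 2
    return last_review + timedelta(days=interval)

# Example: a high-risk, frequently retrained model reviewed on Jan 2 is due again ~15 days later.
print(next_review(date(2024, 1, 2), "high", changes_frequently=True))
```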
Balance Innovation with Controls
- Define clear boundaries and standards that enable rapid innovation while maintaining appropriate safeguards and oversight. Like adding brakes to cars, good governance enables faster progress.
- Focus governance intensity based on risk level and potential impact. Critical AI systems require more robust controls and frequent review than lower-risk applications.
Key Takeaways
Trust is the Foundation for AI Success
Building and maintaining stakeholder trust is essential when deploying AI systems. As the CIA's experience shows, organizations must demonstrate responsible AI use through clear governance frameworks, consistent oversight, and transparent communication about how AI systems are developed and used. Without trust, organizations risk losing their authority, customer base, or social license to innovate with AI technologies.
Dynamic Governance is Critical
Unlike traditional technology systems, AI governance cannot follow a static "set and forget" approach. Organizations need flexible frameworks that adapt to rapidly evolving AI capabilities, emerging regulations, and changing societal expectations. This requires regular reassessment of AI systems based on their risk level and rate of change, with more frequent reviews for high-risk or rapidly evolving systems. The pace of change in AI technology demands continuous monitoring and updating of governance approaches.
Build Cross-Functional Teams for AI Ethics
Effective AI governance requires collaboration across technical, legal, policy, and domain experts. Having data scientists work alongside policymakers helps ensure principled and practical governance frameworks. Organizations should establish interdisciplinary teams with executive sponsorship to develop comprehensive approaches addressing ethical considerations and operational requirements. This collaborative approach helps create more effective and implementable solutions that work in the real world.
Episode Participants
Jacqueline Acker, Esq., is a Deputy Privacy and Civil Liberties Officer at the Central Intelligence Agency. She earned a Bachelor of Arts from the University of Texas at Austin in 2010, followed by a Juris Doctor from the University of Tulsa College of Law in 2013. Her academic background set the stage for her later work in privacy, data security, and ethical technology. Since January 2022, Jackie has been an Adjunct Assistant Professor at American University Washington College of Law, teaching Information Privacy and Data Security. Her role involves educating students on privacy laws and integrating emerging issues into the curriculum, preparing the next generation of professionals in this critical field.
Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in digital transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.
Transcript
Michael Krigsman: Welcome to CXOTalk episode 863. I'm Michael Krigsman, and today we're discussing AI governance and ethics. Our guest is Jacqueline Acker, who is a deputy privacy and civil liberties officer with the CIA. She will share her personal perspectives on these topics.
We all know the CIA, the Central Intelligence Agency, but we're on the outside. So tell us about the CIA and tell us about your role and what you do.
Jacqueline Acker: We work on issues ranging from running a traditional privacy program, like every federal agency has, to implementing statutory requirements. Some of those include making sure that the public has the ability to submit complaints if they believe their rights have been violated.
We're protecting Americans' information under executive orders and developing AI governance in line with intelligence community and wider federal government requirements. And we also support the CIA's chief transparency officer. So, the goal here is to make sure that the public understands our mission, our authorities, and our oversight. And then we also work with executive branch oversight and congressional oversight and publish reports for the public. So that's just a piece of what we do.
There's so much more to it; I'm so proud of my team. My other work includes being a faculty member at the International Association of Privacy Professionals, which has recently been rebranded as simply IAPP because its scope has expanded. As you've probably heard from many of your guests, privacy work is becoming much more integral to other areas such as cybersecurity and, of course, even AI governance. So, they're trying to tackle a lot of those intersecting issues.
I just wrapped up my sixth semester teaching at American University for their Washington College of Law, their master of legal studies program. So I was really happy to be able to do that. I am also very proud of all of my students.
Michael Krigsman: We all like spy shows about the CIA and conspiracies. Do you folks in the CIA watch these shows? And what do you think about these shows?
Jacqueline Acker: Well, I can't speak for everybody at the CIA, but I do. I haven't seen every show there is out there, but there are a few that I follow pretty closely. I've enjoyed the new Jack Ryan series with John Krasinski. I loved The Americans; I watched every episode of that. And then I kind of followed Keri Russell on her journey through The Diplomat. Lately, however, my favorite has just been Slow Horses, because it cannot get any better than Gary Oldman. A long-time favorite of mine.
Michael Krigsman: So when you watch these shows, do you say to yourself, "Oh, that's a close reflection of what we do in the CIA," or do you say, "That is total science fiction fantasy?" Like, how do you relate to these things?
Jacqueline Acker: I would say some have more grains of truth in them than others. I'm not running around in foreign countries; I definitely sit at a desk most of the day. So for me, it's always just a different perspective, maybe.
But I think it is amazing seeing how Hollywood has always portrayed a lot of what we do. And the same thing with our counterparts in Britain and some of the amazing movies, James Bond and things like that. I personally don't see myself as one of those. But I do enjoy watching them, especially when every once in a while there is something where you're like, "Oh, that has a grain of truth in it."
For me, those moments are usually about how amazing the co-workers are and how much they take care of each other. There's kind of a saying that at CIA, we're family, and I think that's the part that I like seeing in these Hollywood portrayals.
Michael Krigsman: So that sense of taking care of each other, that resonates with you as being real?
Jacqueline Acker: That part, yeah, absolutely. I've been at CIA for over 11 years now. When I started there, I didn't know how long I'd be there. I was like, "Oh, this sounds like a cool job."
I had a friend who was like, "Hey, we're hiring lawyers, you should come over." And I was like, yeah, of course, I'm going to go do that. That sounds so cool. And then I got there and I was like, well, we'll see how it goes. I'm fresh out of law school trying to figure out how I want to spend my career, and I just came from a really wonderful firm out in Oklahoma.
So, it was kind of a big leap for me to try something so drastically different, but the people are why I've stayed and the mission that I've been able to work on as well.
Michael Krigsman: Why is AI ethics important to the CIA?
Jacqueline Acker: For us, what's special is it's about the trust of the American people. So if we don't continue to use AI responsibly and ethically, then we could lose that trust and lose the trust of our partners around the globe, around the country, and industry and things like that. So at the very bottom line, it's really all about trust.
Michael Krigsman: Trust in the agency and the role of AI ethics is ultimately to maintain that trust.
Jacqueline Acker: Trust is a lot of things, but part of it is getting it right. Part of it is making sure we're protecting those American values, privacy, civil liberties, things like that. And if we don't maintain that trust, then the consequences could include things like losing our authorities, our authorities to collect data and things like that. And that would make our mission of protecting the public that much harder. So trust is almost a fundamental part of what we have to do.
Michael Krigsman: When you think about AI ethics, can you describe what that encompasses for you?
Jacqueline Acker: I think that can change from organization to organization, from person to person. So in addition to trust, I think it's also making sure that others understand what you're doing. So that is part of transparency and making sure people understand that your product is trustworthy, making sure they understand how it was created. What were the rules that you were following when you made it?
And for the people I talk to in industry, that can also mean explaining that to a consumer, explaining that to your insurance carrier, even. So, being able to make those guarantees, whether it's for sales or to government oversight. I think the transparency piece is a big part of the ethics.
It's also, I think there's so much more to ethics and creating responsible AI. But I would say having a good program in place, having processes, having some sort of governance and oversight mechanism, those are all really key pieces to it.
Michael Krigsman: So in terms of having a program in place, can you describe what are the components of an effective AI ethics initiative?
Jacqueline Acker: It really does depend on the organization. So factors could include what context you're working in. Are you in healthcare? Are you in education? Where are you located? Are you in the EU, for example? And your organization's own values. What do they value? All of those things play into building your program.
Because you have to really define first, what does your organization mean by AI ethics? So, for example, in the intelligence community, starting in about 2019, we wrote the Principles for Intelligence Transparency. And those principles were based on general principles for ethical intelligence that had been around for a while, but we took that and lifted and shifted it for artificial intelligence.
So we built on something that already existed, this kind of ethos that we already had. And then we said, okay, well that's great, but how do you actually do that? So we worked really hard with data scientists, lawyers, and people from, at the time, I think it was 17 different intelligence community agencies. I don't think Space Force was there yet.
But we worked with people across the entire community to create a framework that was basically a set of questions meant to work with any agency's authorities and be very flexible even with what kind of technology it's about, so that everybody has a common script to go through for, okay, well how do we make sure that we're upholding these values? Like, what do we mean by ethics? What are our data requirements and things like that.
So, that was a really key piece for us in the intelligence community. It got signed in June 2020, which was kind of a crazy time, but it was really important that even in the midst of all of the big changes, we were setting that foundation. Having a governance framework out early, and also having one that was created by so many people and therefore could be adopted writ large, was really key for us because we knew we were going to have to use this technology.
And if you set some minimum guardrails, you can go faster, right? So like cars originally didn't have brakes and so they couldn't go very fast. But once they had brakes, they could go faster. And for us a lot of that's also still about that trust and making sure things were safe and everything.
Michael Krigsman: At an organization like the CIA, when you say that it's signed or adopted, where does that adoption or authority to adopt sit inside the organization?
Jacqueline Acker: For the ethics framework, that was signed by the then Director of National Intelligence. So that was at the DNI, and they're, I would say, the integrator across all of the IC. So for IC-wide policy, that's really the appropriate place.
That's not going to be the same at a lot of different organizations. Maybe large companies have subsidiaries and things like that. But for governance and for ethics, even like within the federal community, there's a lot of different ways to do it. You can have that be more centralized or you can have it be decentralized.
You could have hybrid approaches where there's one group that creates the overall policy and then the people who actually carry it out are maybe integrated into different parts of your organization or agency. So this framework, we had to write it to be able to work in all of those different environments, which was a really fun challenge, but I think we're here four and a half years later and it still works really well. So, I'm very proud of that.
Michael Krigsman: Please subscribe to our newsletter. Check out cxotalk.com. We have excellent shows coming up. Subscribe to our YouTube channel. We have an interesting question that has come in from Twitter, and this is from Arsalan Khan, who's a regular listener. He asks great questions. And he says, "Who defines the ethical boundaries? Who should or shouldn't define these ethical boundaries?"
Jacqueline Acker: At every organization, that's going to be different. I talked earlier about how the organization's culture matters, but also things like where you are located. So geography, for example. I think some ethical boundaries are just built into the laws where you are.
So we've seen a lot of that in the United States with different state-level rules on AI. Of course, with the EU, there's the EU AI Act. So you have it, for AI specifically, at those kinds of levels, but I would say even more granularly, there are rules around how you maintain data, data requirements, right?
So a lot of AI ethics is actually just data ethics. What data are you allowed to have? How long are you allowed to have it? What protections does it have to have? So some of that is just your baseline legal requirements. Some organizations go a little further, they take things like making sure that you have the minimum amount of data necessary to complete your job very seriously, and others are kind of like, "We need all the data."
So that organizational culture is really a critical piece of that on top of what is your baseline? Like what can you do under the law? What do you have to do? I think the more fun area to be is in that in between. So what should we be doing? What are ways where we can be ethical but also meet our mission?
So, that is a harder thing to figure out who decides that. And again, for your organization, it might be your shareholders, it might be your CEO, it might be somebody else. We're seeing a lot of organizations start to have chief ethics officers, even chief AI ethics officers.
So I think there's still kind of some norms forming around that in the AI space, but I don't think of it as being really all that different from where we were previously, where we already had these data issues, ethics issues about technology, who should have access to it, and all those things.
So with AI, I think the big change has obviously been with generative AI and kind of the focus on that. And so there's been a bigger shift towards formalizing some of those choices on who is a decision maker. But even one person in an organization should be working with stakeholders across that organization to try to make those choices, and paying attention to what's happening in the world. Where are ethics norms forming around wherever your organization is, or maybe where your clients are, things like that.
Michael Krigsman: Are there unique aspects of this that are specific to intelligence organizations that might be different from what we're used to in the private sector?
Jacqueline Acker: There's some pieces that are unique. I would actually say there's some pieces that could be borrowed by the private sector. So the intelligence community is part of the federal government, and for over 50 years, we have been doing privacy, thanks to the Privacy Act of 1974.
So there's a lot of requirements in that, but a lot of the basic requirements include making sure that information about US citizens and lawful permanent residents is protected appropriately. And what does that mean? So that means in certain systems, if you're able to retrieve information by a unique identifier, you have to have transparency to the public about what that system is, why does it exist? What protections are on that?
And you have to publish that on your federal agency's website so that the public can go request records about themselves, and they basically have this right to access their records and then amend them if they're wrong. So there's been a big check on the federal government for 50 years, and that requires a lot of work to make sure that your data is protected appropriately.
And there's a lot of accountability there. So I think there's a good chunk of that piece that companies can learn from. You can go to any federal agency website and see examples of that. Department of Justice, of course, has some amazing resources as well.
But even things like a privacy impact assessment. A lot of organizations are looking at the federal government's privacy impact assessments and saying, "Oh, we can shift this into an AI impact assessment." It meets a lot of the standards or they're using those already for their internal privacy programs and then just building on that.
So there's that piece that's kind of special to the federal government that I think can be borrowed from. But the other pieces that are, I think, challenging are, in the intelligence community, we can't always be as transparent as we would like because we have to keep sources and methods safe.
So, we can't always share the ways that we got information or who we got it from, and things like that or like the technology that we're using with the general public because then we wouldn't be able to continue using those. So, instead, we have this really robust system of governance and oversight, with people who are cleared.
Michael Krigsman: And of course, what you're doing at the CIA has a lot of maturity, and so there are lessons, I'm sure that the private sector can learn from that.
Jacqueline Acker: Yeah, absolutely. I mean, after 50 years of doing the Privacy Act, federal agencies really have some great experience there. And it's not always been perfect. With any new rules and any new technology, there are definitely challenges and there's growth.
But I actually think about this a lot: the Privacy Act is older than most tech companies by quite a bit. So the maturity of the federal government is something that should definitely be lifted and shifted and used where it can be, because the experiences are in many ways universal.
But you're right, the maturity level is certainly there. And you can see that in the documentation that exists and also just in how the transparency has worked. And there's also been this accountability, not just with the public, but with Congress and other oversight through that. So for any organization that is going to potentially have that kind of scrutiny, like they should absolutely be looking at the mature programs from the federal government as a starting place.
Michael Krigsman: We have a couple more questions that have come in, and interestingly, both of these questions now relate to international relationships and boundaries. So let's try these. This first one is from Greg Walters on LinkedIn, and he says it's his contention that the AI revolution, including privacy, is being led by the US, and that the US is the best hope for an ethical and free deployment and use of AI. And he asks this: "What real threats to freedom and liberty can our adversaries implement with AI aside from drones?"
Jacqueline Acker: There's a values challenge internationally right now. And the US has definitely been trying to lead on this. We saw this even with the National Security Commission on AI report. It was like 700 something pages long. It came out a few years ago before the generative AI boom.
But that report talks a lot about how this is a values competition. And how there are so many different threats to values. And one of the things that I personally find interesting about all of this is the need to protect American values, that's obviously part of what my office does.
But there's the intersection of that need with the need for United States companies to be successful. I think you can read eight different news outlets' articles and get eight different answers about what part of that is important and who should win out and what that means for the international community.
But I really like that report. I think that report is something that still stands pretty well on explaining why some of those values systems could be in peril, but what those competitions really are at that international stage. And a lot of the kind of predictions that it had, I think have been playing out. So if you don't have time to read 700 something pages, I'm sure you can find an overview somewhere or just read the executive summary.
Michael Krigsman: It's very interesting to hear you talk about this values competition, and then the relationship between that and the need to maintain an ethical stance. Is there a tension or conflict between these two different goals?
Jacqueline Acker: I think there's so much alignment here. I strongly believe that creating technology, using technology, and doing that in alignment with your values actually helps, because it helps maintain that trust, it helps create an ecosystem where you can work with partners, whether that's in the private sector or overseas. And that really helps bring these kinds of alliances together of, "Hey, we all have a similar value structure, so let's work together."
I think that's a huge strength. I think that's a huge American strength. But that is part of why it's so important to get it right. And that report really does a fabulous job of explaining all of those things. But personally, I really do believe that there's alignment there. I think that values are a big part of our strength.
Michael Krigsman: And we have another question, again from Arsalan Khan, relating to this international set of issues and he says, "When data is shared across organizations or countries, whose ethical boundaries should be followed?"
Jacqueline Acker: It's so important to have these frameworks, these international frameworks, that basically say, "Hey, here's our shared set of values." And we've seen a lot of these, OECD definitely has several. We're seeing things out of the UN and otherwise, but I think that the best way is to have those standards and have people adopt them and say, "Yes, we're signing onto that."
Because it shouldn't necessarily be about whose standards are winning. It should be an agreement. And those agreements are important because they say, "Hey, my values align with your values. Now we can work together." And we've seen that anywhere from people adopting NIST internationally to ISO standards. So that's kind of how I would approach that, instead of thinking about it as a who's-winning scenario.
Michael Krigsman: What steps do intelligence organizations take to mitigate bias in AI systems for intelligence gatherings or threat assessments?
Jacqueline Acker: In any part of just the data life cycle, not even getting to AI, there are pieces that are important. So we have strict authorities about what data we're allowed to get in the first place. So that's one area where bias could be introduced, or it can be reduced.
But typically, what we're looking at is something that is biased towards what we're allowed to do, which is actually pretty narrow. So there is some bias there, but when you're looking at AI and developing AI, there's a million different questions that have to be asked because bias can actually be something that's helpful.
So if you need to be able to find X topic, say like widgets or something, and you need to be able to find those widgets in pictures from anywhere, you kind of want an AI that's biased towards finding those widgets really well.
So the first question, I think you have to ask is, okay, well what kind of bias are we concerned about? So there's bias that can be good, and then there's obviously bias that we really don't want. So, bias that could harm individuals or harm civil liberties or privacy or harm a specific group, or not have equal application.
But our IC AI ethics framework has an entire section on mitigating undesired bias, and it has a set of questions there. And that's a really good starting place for AI, but also, as I mentioned earlier, it applies to a lot of different technologies. So anytime you're using a lot of data, I think those are great baseline questions to start from. And then, of course, the NIST standards are always fantastic.
Michael Krigsman: What about the internal processes when it comes to ensuring that data and algorithms don't have undesirable bias? Can you talk at all about how the intelligence community manages that?
Jacqueline Acker: A couple of different things coming together. So organizations are different sizes. They need to be able to work within the existing programs that they already have. So for some organizations, they're going to be working those issues through their existing accreditation process, or sometimes it's under a CIO's office.
Some of that could be woven in with the data scientists who are working on the cybersecurity side. Sometimes they're the ones thinking through, "Okay, well, is this data set even complete? What do we really need this data set to be? And then how do we need to protect that data? How do we need to protect that algorithm? What are the different potential injection points into that?"
So sometimes it's kind of done by, "Okay, well we have the experts and they're already over here," so that's who's going to do it. But it really does depend based on, if you ask eight different organizations, you're going to get eight different answers.
But we are seeing more coalescing of some of those and some more professionalization and formalization in organizations around that. So you might have more of a process for, "Okay, an AI is going to be created." We're going to document what the purpose is, and what the potential risks are, what the potential biases are, what the mitigations are, and then have kind of different people who are responsible for each piece of that, including the documentation piece.
So it's getting, I think it's getting a little bit more formalized, but I would say it's still very much organization dependent.
Michael Krigsman: Arsalan Khan comes back yet again. He's really on a roll, and on this point, he says, "If a handful of countries develop AI ethical standards, then that itself creates a bias since other countries' perspectives might not be included, and is the UN a better place for AI standards to be developed for that reason?"
Jacqueline Acker: I honestly have not put much thought into that, so I would like to defer on that one. I don't know. I think it's a really good question. It's one that I should have asked myself. So I'm going to be thinking about that probably all weekend.
Michael Krigsman: Last week on this show, we had as a guest the Assistant Secretary General of the UN for technology. His actual title is chief information technology officer. And I would imagine that he would argue that the UN is the right place to do that, but it's an interesting question, because so many different countries have different levels of maturity when it comes to the development of privacy regulations and issues around AI ethics. And as you mentioned earlier, there is also divergence in terms of their goals and competing perspectives here.
Jacqueline Acker: It's been on my to-do list to watch last Friday's episode, so I'm a little behind.
Michael Krigsman: For an organization that wants to start implementing a program of AI ethics, where should they begin? And I mean implementing it in a serious way, not just giving it lip service.
Jacqueline Acker: That means having your C-suite involved. First of all, making sure that there's executive backing. It doesn't always mean having somebody whose entire job is to be the chief AI officer. We've seen a lot of that at some organizations, particularly bigger organizations, but sometimes that title is added on to, say, the chief data officer or the chief privacy officer.
So depending on the size of your organization, it's really going to vary. But the important thing is having somebody in the C-suite who is backing that. And I've seen a lot of success with the models that have somebody as a chief AI officer to coalesce everybody.
But the success of an AI governance program is really built by so many more people than that. So you have to get your CISO on board, your CIO, your chief data officer. If those are all the same person, then congratulations, you're probably also the AI ethics officer as well now.
But really figuring out just at a baseline, what do you already have in place? I mentioned earlier, like assessment and authorization processes, any sort of data impact assessments that you already have, what are your privacy requirements? So you really have to figure out at a baseline, what do you have, and then what do you need to add to that to be able to do AI governance.
So kind of what's the gap analysis that you need to do? And for some organizations that really is enough. Hey, we're going to add a few questions to our privacy impact assessment, and now it's an AI impact assessment. We need somebody to sign off on the cyber security aspects of AI and the data, and make sure that that part's being considered a little bit differently.
But one of the big changes, I think that we see with AI is that you can't, and not that you should be doing this for regular systems anyway, but you can't just like set it and forget it and come back to a system like once every year or something. How often you come back to that is going to be periodic, and it's going to depend on a lot of factors, including like what is the purpose of the AI? How often is that algorithm changing? Is it machine learning and it's changing some every day?
And what are the consequences? So if the consequences are really significant, you can't just like look at it once a year and see if it's still working. So that, I think is going to be one of the pieces that has to change the most for a lot of organizations, is having enough of a process in place to go back and look at your AI on a periodic basis that's appropriate for that exact use case.
Michael Krigsman: I have to imagine that will be a significant change for some organizations, because if you're used to dealing with older non-AI technologies, you put them in place and, of course, they're basically static. But with AI, and especially generative AI, as the models change and as the prompts change, the results can be drastically different.
Jacqueline Acker: They can be completely different. And then the other thing that's changing almost as quickly, it feels like, is the legal compliance regimes around this. So there are new laws coming out, it feels like every day. New standards being created, it feels like every day.
And then boom, suddenly there's a text to video generator and you have to rethink everything. So making sure that there are people who have the time and space to be focusing on, "Okay, well what are the new technologies that are coming down the line?" And can build something that's flexible enough to anticipate significant change that might happen short-term.
But things that probably aren't going to change are things like, "Okay, well we need to have some sort of data impact assessment or if we're procuring things, like we need to have some baseline procurement processes." And there's been some great work being done on procurement by a number of different organizations.
So I think those are critical pieces for a lot of organizations. Making sure that you have your lawyers involved, obviously, especially in that procurement or if you're selling something. But yeah, also having people who have the space to focus on, "Okay, so what laws might be coming down the pike?"
Because if you are working on creating a governance program, but you know there's going to be a big compliance regime being implemented, you just build towards it now. And think it through, too. It's easy for me to say all of this, right? But nobody has time to do this for every single AI.
So really being able to rack and stack like where that risk is the highest and focusing your energies on that.
Michael Krigsman: You've raised a really interesting point. With AI in particular, you can't, as you said, take a static view because it is changing so rapidly, which places an additional burden on folks who are dealing with these issues to not only manage and lead what's inside their organization and the ethical issues around that, but to a very large extent also be looking outside into the environment: the changing legal environment, the social and cultural environment, and the expectations. So it becomes a very large mandate.
Jacqueline Acker: That's part of why we're seeing more organizations have somebody who is a chief AI ethics officer, or why we're seeing some expansion. And I don't want to put words in their mouth, but I think this is one of the things I've really liked about how IAPP has been expanding their offerings: understanding that a lot of this has been falling on privacy and compliance officers, so they need to know more about AI. They need to know more about cyber security.
They need to know more about these different areas. So I actually helped them develop the AI Governance Professional Certification over the last couple of years, and I helped teach that class, to make sure that we're kind of bringing up the next generation of AI governance professionals. And the people I've had in my class hail from far beyond privacy. And so it really is expanding, but kind of flexibility and anticipation are the new nature of the game. Like you said, it's not static anymore. You can't do like a waterfall development process. You really have to be more elastic.
Michael Krigsman: And it's so interesting that if you don't have your ear to the ground when it comes to likely potential regulations and social or cultural expectations, you can really be screwed almost out of thin air, because you weren't watching the broader view beyond your narrow domain.
Jacqueline Acker: That's really true. And there are so many pieces that there are still norms forming around, too. So anticipating some of them, but also, if you have the ability in your organization to actually do some of this and figure out how to do things like, "Okay, well, what should an AI incident response program look like? Is it the same as a privacy incident response or a cyber incident response program?"
Probably not, because where an AI goes wrong could be that the data was wrong. So you're going to have a different set of stakeholders in place. But if you can help create processes that work, those might start becoming the norms, right? And go teach people, "Hey, here's what's working for us."
We had an incident, and now, because we had this process, we knew where to go back and fix the algorithm. We got rid of this errant data stream and now it's working again or whatever your system is. But yeah, I think if you have time to focus on some of those more nascent issues, you may also be able to help influence what those norms are. And make sure that it actually works for an organization.
And I say this as a policy person, but like, I try to bring a data scientist along with me to every conversation I have, because like, nobody wants the lawyers or just the policy people to be writing this. Like it has to actually work at the end of the day. So I would very much foot stomp having an interdisciplinary team on that.
Michael Krigsman: That's definitely important. If you start making policy without having a pretty complete understanding of the implications of the technology, the data, how it's actually being used and what the implications are, then your policy is going to miss the mark.
Jacqueline Acker: Yeah, absolutely.
Michael Krigsman: Can you talk about governance? You've mentioned governance a few times. Can you drill into what do we mean or what do you mean when you talk about AI governance?
Jacqueline Acker: If you are still struggling to get your data governance program in place, I would say continue that work. But with an eye towards what you might need to do for AI as well. But governance means a lot of things. At the baseline, it's processes that enable you to have responsible AI, ethical AI.
But some just very high-level parts of governance, in my opinion, are things like, "Okay, define what it is that you're actually trying to do with this system. What is the scope of it?" You know, what is its operational environment and what specific tasks is it going to perform? Because if you define those, then you can do this analysis of, does that actually work within our organization's ethical principles and our goals?
Because sometimes things sound like a great idea, but then when you start taking a step back or peeling that onion back, you realize, well, maybe the impact of that is actually not aligned with our goals. So having governance, to me, is even just sitting down and having that conversation upfront of what are we trying to do here and is that part of our organization's principles.
And then from there, "Okay, well what data do we need to do that? Are we allowed to have that data?" What do our stakeholders think about this, whether that's internal stakeholders or external stakeholders? Are you in a highly regulated environment? What rules apply? What rules might apply going forward that we should plan for now?
There's some really interesting work being done by various people, I would say a lot of former government people too, where they're saying, "Hey, if you're trying to build something that's going to do X, you should probably figure out how to do privacy protections up front."
Michael Krigsman: We have another question from Twitter, again from Arsalan Khan. Arsalan is really on a roll. And he says, "Should AI be regulated? And if so, what about the impact on innovation?"
Jacqueline Acker: I really do believe that when there's the right balance of regulation and governance, it can help actually speed up some of that innovation. But this is so much harder with AI, I think than with some other technologies because some regulations are very much looking at just the AI technology that exists today, and some of the AI technologies that people are building towards, maybe you need a different type of governance altogether for.
So I think we have to be very conscientious to get that balance right as a society. But going back to the car analogy, I mean, cars didn't have brakes, and so you could only go a few miles an hour. And then once you added brakes, you could actually speed up.
Okay, well, extend that out. So somebody said, "Okay, well we should have roads and we should make it where in some countries you can take a right on red and others it's not allowed or whatever." But like those things allow different cars with different drivers to be on the road at the same time and still move around.
And I think that's where we need to be able to get, this kind of nice equilibrium that allows innovators to know, "Okay, well, I know what's legally allowed, so I know what space I have to move within, so I can move fast and not worry about somebody coming in afterwards and saying, 'Okay, well, you can't do any of that. Your whole company is gone.'"
We have to have some amount of rules of the road so that they can work together. But also, I think that's going to be important because AI is going to have to work with AI, right? It's not all about humans anymore. Like we're going to have to have some standards so that we can say, "Okay, well, this AI is going to be interacting with this AI," and how do they know that they can do that?
So we have to be able to build towards like the world that's going to exist, instead of just the one that we're in today, and that's a huge challenge. I don't pretend to know the answer. I think we're all working to get there a little bit at a time every day.
Michael Krigsman: It's a very interesting point that having clarity around what's allowed and what's not allowed enables interoperability to take place, and predictability as well, which is so important for any organization.
Jacqueline Acker: Yeah, and for sales too, right? I don't know why this is what just came to mind, but I had a Furby as a kid, and I don't know if anybody even remembers those, but there was a lot of controversy when everybody bought them and then they kind of did different things.
And people were like, okay, we didn't know what we were getting into. Like, "I would like to return my Furby," or, "My kid's Furby is now in a closet, buried underneath a bunch of coats or something so nobody can hear it."
That's not what you want, right? You want your technology to be adopted and trusted, and for people to say, "Hey, this was really well done. I understand how it works. I have as much transparency as I can." So that trust is still, I think, at the baseline of a lot of this.
And the trust has to go both ways. The technologists have to trust that the policy people aren't going to just change their minds all of a sudden, right? So that conversation really has to have all the parties at the table. And I mentioned this earlier: I like to bring a data scientist with me basically everywhere. If I could actually just bring one everywhere, that would be fantastic.
But we have to make sure that, just because something sounds like a great solution, it will actually work. And a lot of times, the data scientists are the ones who come up with better policy solutions than the policy people, because they really understand the nitty gritty of, "Okay, well, here's what's possible," or, "Hey, that's not what we're building towards."
So making sure that that is a conversation not just happening kind of as a blue sky exercise with a bunch of policy folks is really critical.
Michael Krigsman: What advice do you have for business people? What can business people learn from government efforts?
Jacqueline Acker: There's so much out there that you can pick and choose from and tailor to your organization. Looking at anything from, "Okay, well, what do transparency reports look like?" I mentioned data impact assessments and privacy impact assessments, as well as, briefly, systems of records notices that explain to the public: here's how we protect your data, here are the requirements there.
Those are all fantastic resources to get started if you're like, "I don't know what to use." But also, the government doesn't just have experience; it has lessons learned. So I would learn a little bit about the history there. And also maybe learn about what's not the same. We're working for the American public, maybe not for shareholders. What is that nexus, and what might you want to do a little differently?
Michael Krigsman: Very interesting. Well, a huge thank you to Jacqueline Acker, Deputy Privacy and Civil Liberties Officer with the CIA. Jackie, thank you so much for taking the time to be with us. Very interesting comments.
Jacqueline Acker: Oh, thank you. I'm glad to be here and share kind of my perspective and thanks for all the great questions too. I know I will be thinking myself about some of those all weekend.
Michael Krigsman: Yes, thank you to everybody who watched, you folks who asked such great questions. You guys are really smart.
Before you go, please subscribe to our newsletter. Check out cxotalk.com. We have excellent shows coming up. Subscribe to our YouTube channel and we'll see you again soon everybody. Thank you so much and have a great day.
Published Date: Dec 13, 2024
Author: Michael Krigsman
Episode ID: 863