AI Ethics and Responsible AI for Data Scientists

As data scientists and business leaders, we need to think about the ethical and privacy considerations of machine learning and artificial intelligence. FICO's Scott Zoldi shares recommendations around responsible AI, ethics, and privacy during this important conversation.


As data scientists and business leaders, we need to think about the ethical and privacy considerations of machine learning and artificial intelligence. However, this isn't always easy. What happens when your model is wrong? Or if your model learns something you didn't teach it? And what about fairness -- do certain groups get better or worse results than others?

To help demystify these topics, FICO's Scott Zoldi shares recommendations around ethics and privacy, as well as practical advice for how AI teams should respond to mistakes or issues that may pop up during testing.


Scott Zoldi is chief analytics officer at FICO, responsible for the analytic development of FICO's product and technology solutions. While at FICO, Scott has authored more than 100 analytic patents, with 65 granted and 53 pending. He serves on the boards of directors of Software San Diego and the Cyber Center of Excellence. Scott received his Ph.D. in theoretical and computational physics from Duke University.

Transcript

Scott Zoldi: We persist our model development governance process and we define it around blockchain technology, so it's immutable.

Michael Krigsman: Scott Zoldi is the chief analytics officer of FICO.

Scott Zoldi: We're a company that focuses on predictive models and machine learning models that help enable intelligent decisioning: decisions such as fraud detection and decisions such as risk. At the heart of our company are analytic models that are driving these decisioning systems.

My role as chief analytics officer is to drive the decisions we make with respect to machine learning and analytic technologies that enable this decisioning software. That includes a lot of research in areas such as machine learning as we address the increasing digital needs out there with respect to decisioning systems and software.

What is responsible AI?

Michael Krigsman: You're using a variety of models and collecting and analyzing lots of data. You're in financial services, and so the whole field of responsible AI is very important to you. When we talk about responsible AI or ethical AI, what do we actually mean?

Scott Zoldi: Responsible AI is this concept of ensuring that we have a level of accountability with respect to the development of these models. I like to talk about it in four different pieces.

The first is robust AI, which is essentially ensuring that we take the time to build models properly. That means understanding the data that we are receiving, ensuring that it's well balanced and not overtly biased. It includes choosing the right sort of algorithms and doing the robustness testing to ensure we build models properly.

The second part of this is explainability or interpretability of models. What is driving that decision in the model? What has the model learned and is that the right thing that should be driving decisions?

The third part of responsible AI would be the ethical AI component. That is essentially making sure that the model does not have a different impact on different types of people or groups of people.

The final piece, which is largely ignored today, is auditable AI, so being able to audit how the model was developed and ensure that it is going to operate as intended as it starts to be used to make decisions about customers.

If you do all that, you instill a set of processes and accountability around how these models are developed, and you monitor them to ensure that they're staying in check as they're making really important decisions about customers.

Michael Krigsman: We have a very early question from Twitter. This is from Eric Emin Wood. He's wondering to what extent this concept of responsible AI is part of your work at FICO.

Scott Zoldi: It's really central to my work. In fact, I'm really pleased; I've authored now 20 patents in this area, so it's a huge focus for me and for the entire firm.

With the types of models that we make (for example, in credit risk), we have to be very, very certain around how we develop those models. We have to have a corporately defined standard for how we develop models at FICO and enforce it. One of the things that we like to focus on there is emphasizing that these types of models that impact customers should not be left to data scientists' artistry but should rather follow a prescribed, responsible AI framework.

The way that works is that we have a corporately defined model development governance process, which the scientists adhere to. We have checks and balances, we monitor that, and we record progress along the way so we can demonstrate not only that we want to be responsible but that we are, because we followed this process. We can enhance that process over time, so it's a huge focus for our firm. I think it's going to be a huge focus for every firm that is going to be applying AI and machine learning technologies in the digital world in the coming years.

AI governance and AI ethical principles

Michael Krigsman: You made a very interesting comment. You said that ethical AI, responsible AI, and managing these tradeoffs in the models should not be left to data science artistry. What did you mean by that?

Scott Zoldi: Today, what we generally will find is different scientists will have different ways they want to solve the problem, or maybe they went to university and they got trained with a particular type of algorithm that they just really know well, they enjoy, and they love. Some algorithms are better for responsible AI than others.

For example, a lot of deep learning models might be great for image recognition, but they may not be the right choice when you're making a financial decision where you have to be very prescriptive around what it was that caused Michael not to get the best score and that maybe impacted the quality of the loan he was hoping to get. That means constraining the data scientists: these are the technologies that we find responsible, these are the technologies (from a machine learning perspective) that are interpretable, and this is how we're going to build models.

Although there might be 101 different ways we can build this model, we're going down these paths. They're corporately sanctioned for these reasons. We can revisit that from time to time, but we can't have Sally, Bill, and Frank all building the model differently based on the data scientist that's assigned.

It has to be really something that is constrained and something that is continually reviewed in terms of best practices for a particular firm. That's a big part of responsible AI. Otherwise, you're going to have 101 different reviews of whether or not you think the model is built responsibly and ethically, versus saying this is the process that we use, we'll enhance it over time, but we're going to enforce it and ensure that we have consistency in the development of the analytics.

Michael Krigsman: To what extent is this data science artistry, as you were describing, the norm today relative to the organizational standards approach that you were just talking about?

Scott Zoldi: The artistry, I think, is the majority of what's happening in the industry today. One of the things that I like to talk about, Michael, is just getting companies to think about how their organizations are set up from an analytics perspective. Is it a city-state model where you have lots of different analytic teams and they define their own sort of methodologies, or do you have a centralized, corporate standard?

That's pretty rare today, but I think, increasingly, it'll be more and more common because these concepts in responsible AI are actually a board-level and C-level conversation. It's not sitting with the data scientists.

The individual data scientist will not have the entire purview of how to ensure that their models are responsible. It really takes that corporately defined standard, but many organizations don't have that today. They have piece parts.

One of the things that I hope that every organization will do (if you're thinking about your own operations today) is to ask that question. Where is it written down, and is everyone using the same processes?

Obviously, once we define that, now we can start having a conversation on how to make it better, how to innovate around it, or how to create technology to get more efficient at getting these technologies built responsibly such that we can meet our business objectives but also ensure that we're not harming people because we have the proper controls in place around that.

Michael Krigsman: We have a question directly on this topic. This is from Wayne Anderson on Twitter. He says, "Why have more companies not formalized digital ethics and responsible AI programs at scale? Is it an issue of it not being a priority, or are there larger blockers?" What are the obstacles to this approach becoming more prevalent in the business world?

Scott Zoldi: One of the main blockers is just a lack of visibility around how important this is at the board level and the C-level. I'll give an analogy.

Many organizations today have their chief security officers coming to the board meetings and talking about how the organization is protected from cyber security threats. They'll have regular readouts because the organizations understand that cyber security is a tremendous risk to the organization, their operation, and their reputation.

That same thing can be said of responsible AI, but we don't typically have chief analytics officers or heads of analytics teams talking to boards about the relative risks of the AI models and machine learning that they're using in their organizations. The biggest impediment is probably just awareness of the risks that are being taken on.

Frankly, we've seen a lot of bad examples out there from very well-known companies. But I think, increasingly, we're seeing regulation increase in terms of what's considered high-risk and responsible ways of developing models. I think it'll become more front of mind for these organizations.

They may then have to say, "Well, okay, now we start to understand that all this AI hype around 'just let them all figure it out' is dangerous. We need to put the controls in place." Then they can start to ask the questions around how they are properly organized such that analytics doesn't have, let's say, 100 voices in an organization. It has a single voice, and it's a voice that is really informing the risk profile that the company is taking on with respect to its use of AI.

Michael Krigsman: To what extent must data scientists (in your view) have a nuanced understanding of these ethical issues and tradeoffs, as distinct from the functional success of their models and algorithms?

Scott Zoldi: One of the things that I like to try to promote is this concept of explainability first and prediction second, which basically means that we have to stop rewarding data scientists for high-performing models as if that is the only business objective we have. The proper business objective should be that we understand the model, we can interpret it, we know that we're following a responsible AI framework where the models are ethical, and we're monitoring those models over time.

We need to broaden the data scientists' perspective so they see the full lifecycle of these models, including how decisions get made, so that they understand the gravity of the decisions made with these models and their role in that. Otherwise, what happens is the business owners become calloused because they say, "Well, the model told me so," and the data scientists are removed from the business, and from the customers that are impacted, because they're working in a protected, enclosed data environment.

Showing them the full implications of responsible AI, and the fact that they have a really important role to play in ensuring that models are built ethically, responsibly, and robustly so that they can be audited, really helps to close this gap.

Very often, that's when the data scientists start to say, "Listen. This is way too big for me to understand on my own. I need some framework to help me." That starts this conversation around this model development governance process.

Now they say, "Now I have the guide rails that enable me to ensure that we're meeting a standard," that, say, a FICO would define or their individual company would define as a proper way of building these models.

AI ethical standards and government policy

Michael Krigsman: We have another question from Twitter relating to the governance point that you just raised. I have to say, I love the questions that come in from social media because we're having this real-time conversation with folks who are listening. The audience is so sophisticated and smart, so insightful.

Arsalan Khan, who is a regular listener – thank you, Arsalan, for that – asks, "As more and more companies use AI that can affect millions of people, potentially, should we only be relying on the good faith of these individuals and their companies to be responsible? Should the government be involved?"

Scott Zoldi: I think the government will be involved. The way that I see regulation stepping up is that we have, for some time, had a number of companies that promised to do no evil, and then, ultimately, we find these big, big mistakes occurring. I think the government will take a role.

I think the other major aspect of this is that it's not going to be good enough to say that I intend to do well, I had a pledge I would do well, or I have an ethics board that makes me feel as if I'm doing well.

One of the things that we do at FICO, for example, is we persist our model development governance process, and we define it around blockchain technology. So, it's immutable, meaning that what backs our intent to be responsible is a model development governance process.

Then what backs the fact that we've actually done it is this blockchain technology where we can persist those steps to the blockchain so that it's immutable. We've demonstrated we followed that process.

I think, in the future, we could see a day where government agencies or regulators will say, "Can you show me what your responsible AI development framework is? Can you show me some level of evidence that you have a proof of work, that you follow it?" I think we may see more and more of that.

It's been my strong hope that organizations will develop this themselves. I see some great headway, particularly in the financial services space, where different organizations are coming together to try to define that. It's very much like the ISAC model in cybersecurity, where different subgroups of industry have their own cybersecurity standards that they want to follow.

Ultimately, as we see some of the GDPR and European legislation around AI developing, and internal AI policies being developed around responsible AI, I think we'll see more and more government involvement, because in many organizations we're not seeing enough proof of work – evidence that we go beyond words and actually have established processes that are adhered to.

Michael Krigsman: Scott, you just mentioned the use of blockchain to make these responsible AI decisions immutable (as you described it). Can you elaborate on that for us a little bit?

Scott Zoldi: What we have in these model development governance frameworks is: what is the model we're trying to build? What was the data that was used? Who are the scientists that are involved? What are the steps and the requirements that we need to see, let's say, within different sprints of a model development process?

Very often, this is an agile development process for models. We will persist the progress through the blockchain.

If a data scientist has gotten the data together, we'll say, "What does the data look like? Have we done some tests around whether there is any sort of bias in there or data quality issues?" That will get exposed.

It'll be reviewed by an approver. That approver will either give their approval or deny approval and ask the data scientists to go forward and do more work.

All those steps get persisted, along with names [laughter] to the blockchain. So, it's an accountability measure.

What we're trying to emphasize is that we are documenting (in that immutable blockchain) all those steps. So, at the end of the modeling process, we have ensured that we have done all the proper checks that are part of our process to inspect the data.

We have used the prescribed analytic technologies in terms of types of machine learning models. We have used standard codebases that define our variables. If we've added new variables, it's gone through a formal review with stakeholders so we don't learn about it two years later when we didn't know something was in there. We do the stability, robustness testing, and ethics testing. All that gets put to the blockchain.
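
To make the proof-of-work idea concrete, here is a minimal sketch of a hash-chained governance ledger in Python. It is purely illustrative: the record fields, class names, and use of SHA-256 chaining are assumptions for the example, not a description of FICO's actual blockchain implementation.

```python
# Minimal illustrative sketch of a hash-chained governance ledger, in the spirit
# of persisting model development steps immutably. Field names, classes, and the
# SHA-256 chaining are assumptions for illustration, not FICO's implementation.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List


@dataclass
class GovernanceRecord:
    step: str             # e.g., "data_bias_check", "variable_review", "ethics_test"
    details: dict         # test results, variables added, thresholds, etc.
    data_scientist: str   # who did the work
    approver: str         # who reviewed it
    approved: bool
    timestamp: float = field(default_factory=time.time)
    prev_hash: str = ""   # hash of the previous record; this is what chains the ledger

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


class GovernanceLedger:
    """Append-only ledger: each record commits to the hash of its predecessor,
    so editing any earlier step after the fact breaks verification."""

    def __init__(self) -> None:
        self.records: List[GovernanceRecord] = []

    def append(self, record: GovernanceRecord) -> str:
        record.prev_hash = self.records[-1].digest() if self.records else "GENESIS"
        self.records.append(record)
        return record.digest()

    def verify(self) -> bool:
        return all(curr.prev_hash == prev.digest()
                   for prev, curr in zip(self.records, self.records[1:]))
```

The point is simply that each persisted step (data checks, approvals, names) commits to everything recorded before it, which is what makes the development record tamper-evident.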

It actually goes even further: that same blockchain becomes the source of record from which we monitor models. This is a really important point. You're not ethical and you're not responsible until you start using the model.

I could tell you that I built an ethical model, and you'd say, "Great, Scott," but it doesn't really matter until it starts to impact customers. And so, an important part of this (in addition to showing proof of work) is to monitor that model for things like bias drift.

What would happen is you'd identify what those key (what we call) latent features are that drive the behavior of the model, and you'd monitor that. As soon as those latent features start to have a distribution that starts to get misaligned, you have an alerting mechanism.

It's really important that that same blockchain is the thing that's going to inform us of how to monitor that model and maybe how to adjust to changing times (like the pandemic that we are in and hopefully getting through in the near future), because data changes and the way the models respond changes, so having that visible is really, really important. It serves lots of purposes, but it gets down to a pretty atomic level.
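
One way to picture that kind of latent-feature monitoring is the sketch below, which compares a production sample of a feature against its training-time baseline using the Population Stability Index (PSI). The metric choice and the 0.10/0.25 alert thresholds are common industry rules of thumb used here as assumptions, not FICO's stated method.

```python
# Illustrative drift check for a single latent feature. PSI and the 0.10/0.25
# thresholds are assumptions (common rules of thumb), not FICO's actual metrics.
import numpy as np


def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training-time baseline sample and a
    production sample of the same latent feature."""
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]  # decile cut points
    base_pct = np.bincount(np.digitize(baseline, cuts), minlength=bins) / len(baseline)
    prod_pct = np.bincount(np.digitize(production, cuts), minlength=bins) / len(production)
    # Small floor avoids log(0) and division by zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))


def check_drift(name: str, baseline: np.ndarray, production: np.ndarray) -> str:
    score = psi(baseline, production)
    if score > 0.25:
        return f"ALERT: {name} PSI={score:.3f} (distribution misaligned, review the model)"
    if score > 0.10:
        return f"WARN: {name} PSI={score:.3f} (moderate shift, monitor closely)"
    return f"OK: {name} PSI={score:.3f}"
```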

Michael Krigsman: You mentioned using blockchain to therefore help detect whether bias is creeping in, but your analytic methods for detecting bias can also be biased. And so, how do you sidestep that problem?

Scott Zoldi: What we're trying to focus on is, as we develop the latent features for these models, that's where we would test. Let's say we have different groups of individuals. We want to show that the features that drive the model behave similarly between different protected classes.

We can do that on the data set that we developed the model on. We set criteria around an acceptable amount of variation across, let's say, protected classes. Then, going forward, those same measures that allowed us to pass an ethics review with respect to how the model was built on historical data are the same exact metrics that we're monitoring in the production environment.

That's how we go through the review of these latent features and what's accepted or not. Then we ensure that they continue to stay in line within the production environment.

Michael, we do throw away a lot of latent features. And so, we may find that a latent feature seems to make a lot of sense. You and I might look at it and say, "Yeah, that's reasonable. There's a reasonable relationship." But then the ethics testing would say, "Oh, but Michael and Scott, you didn't really realize that these two different groups have very different types of distributions and behaviors," even though it might make a lot of functional sense or notional sense why it works. And so, that's the process.

Yeah, the process is to root those out and get away from situations where you and I might – because of our perspectives, because of the time that we spend – say, "Yeah, it looks like a great feature," but ultimately, we need all that tooling in place to say, "Maybe it was. Except, no, it's not ethical. You have to remove it." Then you continue to rebuild that model and find other latent features.
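
A hedged sketch of what such a feature-level ethics test might look like: compare a candidate latent feature's distribution across protected groups in the development data and drop the feature if the groups diverge beyond an agreed tolerance. The Kolmogorov-Smirnov statistic and the 0.10 tolerance are placeholder choices, not FICO's actual criteria; the same metrics that pass here would then be the ones monitored in production, as described above.

```python
# Illustrative ethics check on a candidate latent feature. The KS statistic and
# the tolerance value are placeholder assumptions, not FICO's actual criteria.
import numpy as np
from scipy.stats import ks_2samp


def feature_passes_ethics_check(feature_values: np.ndarray,
                                group_labels: np.ndarray,
                                tolerance: float = 0.10) -> bool:
    """Return True if the feature behaves similarly across every pair of groups
    (e.g., protected classes) in the model development data."""
    groups = np.unique(group_labels)
    for i, g1 in enumerate(groups):
        for g2 in groups[i + 1:]:
            result = ks_2samp(feature_values[group_labels == g1],
                              feature_values[group_labels == g2])
            if result.statistic > tolerance:
                return False  # groups diverge too much: drop this latent feature
    return True
```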

Responsible AI and corporate culture

Michael Krigsman: On this topic, we have another question from Twitter, from Lisbeth Shaw who asks, "What if the responsible AI framework developed by the analytics and AI team is at odds with the corporate culture and the business goals of the company?"

Scott Zoldi: This is not an analytics governance standard. It's really a company governance standard. I think the first thing that has to occur is you have to find somebody at the C-level or the executive level that is going to be the sponsor. We don't want to have an analytics team that has no voice here.

If that sponsor is not someone like a chief analytics officer, then is it the CEO? Is it the chief risk officer? Is it the CTO who is going to take responsibility for, essentially, the decisioning software that impacts customers? I think that the question is the right one.

I'd say one needs to find that executive sponsor, and it has to be someone at that C-level who will say yes. If it's not an analytics person sitting in that seat, then it's going to be someone who is going to take on the risks associated with the use of those models in business processes, whether it be a CRO or someone else. Then we don't have this sort of disconnect between what the company wants to achieve and the risks associated with the AI model. But it's a real problem, obviously. We've seen many occurrences of this.

If an organization has trouble with that, then we have to go back to – I like to ask the question sometimes – "When is it okay not to be ethical?" Generally, people have a very difficult time with that question. We want to put ethics and responsibility first and have companies make that one of their tenets because, frankly, one can solve most of these problems in a responsible way, if they take the time to do it, without really impacting the performance of their models.

There are literally an infinite number of models that I could build to solve a problem. Some are good. Some are bad. And some are ugly. We want to avoid the bad and the ugly and go for the good. And so, these things are typically not at odds with one another, but it's an education thing around we can meet the business objective while still being responsible.

Michael Krigsman: From that standpoint, it's not much different than sustainability or other corporate objectives or corporate investments? Let's put it this way.

Scott Zoldi: Absolutely. I think that's the conversation that has to occur. Frankly, that's the challenge, to that earlier part of our conversation.

For the last decade, we've been hammered with "AI will solve every problem that you have, Michael. Then with cloud computing, we'll do it cheaply. With open-source, I don't really need to write the code."

But we didn't stop in all that hype cycle and all that energy to actually understand that right now, in a period of time where AI has to grow up, we have to take the same sort of fiscal responsibility for these technologies. They are as dangerous as any technology that we would use within our businesses, primarily because the biases and instabilities are very often hidden unless you really spend time to prescribe how you're going to build the model, to expose it to the light of day, and to have a conversation about it.

Michael Krigsman: We have a couple of more questions from Twitter. Wayne Anderson asks, "In the larger data ecosystem, we see very divergent values in Europe, China, Russia, and the U.S. for maintaining data. How does this affect the calculus for governing AI at scale?"

Scott Zoldi: For global companies, it's a key challenge to try to mediate all these differences. I guess I'd respond in this way. What we generally focus on is the most stringent of the data requirements.

If you look at GDPR and some of the European regulations, the ability to get consent, to demonstrate where data came from, and to maintain data lineage is a best practice. Building to that best practice is probably a good idea because the United States and other countries are looking to Europe as a leader in some of this legislation and best practices. In other areas where the standards might be a little bit lower, the corporation needs to make a decision around whether they are going to take the high road with respect to how they're going to conduct their business within a country.

As an example, very often I'll work with clients, and the quality of the tags, essentially whether someone is good or bad, is dirty. Maybe it combines a credit risk behavior with a fraud behavior. A client might say, "Just throw it into one model because I want to figure out loss," but those are two very different behaviors.

Very often, we have to say, "No, we're not going to do that. We have to be very precise about what that tag is."

And so, in addition to trying to adopt the best practice, one might (from a corporate perspective) say, "You know what? We're not going to do the right thing in Europe and then take a lower ethics position in a different region only because the standards are not as strict there."

That's part of the tradeoff and the decisions that need to be made. But again, typically, it's an education thing, and it's also a corporate reputation piece: will we try to meet the best practice with respect to things like the right to consent around data, the right to explanation, and the right to remove data from models? If you start to think along those lines, as a possible future set of constraints across the majority of the business, then you're prepared to meet a global demand with a best practice that you work towards.

Managing bias in AI planning

Michael Krigsman: Again, Wayne Anderson wants to know, "Can you achieve bias management in responsible AI if you aren't prioritizing diversity in the workforce of machine learning?" – A very interesting question – "And how do you shake up the perspective to look for a different view?"

Arsalan Khan comes back and says he believes that diversity is important when developing AI that will address biases and perceptions. So, they're really saying the same thing. How do you weed out bias if you don't have diversity in the workforce (among the people who are writing these models)?

Scott Zoldi: Again, there are a couple of pieces here. Yeah, one wants to have diversity in the workforce. That's number one.

We have to have that focus with respect to the workforce perspective. That includes differences in terms of the types of people hired, the parts of the world we hire them from, and potentially different exposures to different parts of the business. I think that's critically important.

With respect to a governance process, these are the sort of things where this governance process is revisited on a regular basis. It could be quarterly, or it could be even more rapidly if someone has a difference of opinion.

One of the things that we will routinely do (for example, in our own processes) is reassess. If we say, "These are the technologies that are not allowed," then as new things develop, we're more than open to change or revise our decision around a particular algorithm. And so, we have diversity in terms of ensuring there's flexibility in the corporate model development governance standards that we use, but we also ensure that every scientist has a voice.

I think the best organizations out there that I've seen really make a concerted effort to make sure that each and every person within an organization has a voice to express a difference of opinion and to challenge a standard. Usually, if that's done well, then the scientists will have that diversity of thought, these will be exposed and discussed, and the standards get better because of it.

Michael Krigsman: Another great question coming in from Twitter (again on this same topic) is, how do you address the veto power of an executive who doesn't agree with what the data is saying?

Scott Zoldi: A couple of things. I think there has to be a tiebreaker environment. One might have veto power, but the veto itself should be reviewed from an ethics perspective. If you have the key executives, let's say each of them has a veto, that veto should actually be reviewed by the independent ethics board to understand why it is being exercised.

Now, if it's being vetoed because that business owner doesn't like a set of ramifications with respect to the business outcomes, that can be reviewed in the context of the risk-based model. I think risk-based models are probably the very best way to address this. They're used extensively in areas like anti-money laundering.

I think we should try not to get to a flat veto power. We should look at it from a risk-based perspective. What is the risk in taking this decision and why does this individual feel this way? Then expose it in terms of a set of risk parameters that the company overall will try to take versus denying of a particular model or not.

There will be situations, potentially, where maybe it's so egregious that the data and analytics organization might feel that they have to escalate or really broaden that conversation. But having that risk framework in place, I think, will be a lot better than just having, let's say, an executive who is either misinformed or potentially biased in their own view of a particular type of model or outcome. It exposes the decision for the entire set of executives to ponder the risk-based decision they need to make.

Michael Krigsman: In summary, then you're depersonalizing these decisions, but doesn't that also require the executive to be willing to submit to the decisions that are made by that process and by that risk committee?

Scott Zoldi: Correct. I think most executives will if there's a process in place.

It is very often (at these executive levels) decisions where not everyone gets what they want. Some people have to concede. They'll look at the pros and the cons, and their concerns will be acknowledged. Then the group will make that decision.

I think that's something that, from a corporate accountability perspective, a frankly effective working executive team is already used to. It's just that now we have to give them the framework so that that conversation can be had, where AI, ethics, and things like this may not be their core competencies.

We need to help them understand what the risks are and what's at stake. Most of them won't decide to take down their firewalls just because they're expensive, because they understand what the risk is. The same thing has to occur on the AI side.

Education, education, education is another really big part of this so that they understand. Most of them, if they're doing their jobs, will make very good, rational decisions when they have that framework in place.

Michael Krigsman: That's a really, really good point: in many respects, it's not too different from recognizing the importance of the firewall. We don't simply take it down because we feel like it today, because there's something that has to be dealt with. We have a considered decision process to get there.

Scott Zoldi: Yeah, and that's part of that growing up and that maturity. I do a lot of work in cybersecurity, and these frameworks are there for a reason. [Laughter] They're there to protect us from ourselves and well-intentioned employees that make mistakes.

The key thing I hope to see developed over the next five years is more of these frameworks in place. I don't think people will push against it. I think a lot of people who are perceived as not supporting ethical, responsible AI simply don't understand it or don't understand that they need to pay as close attention to it as they would, let's say, the security posture of the organization.

Michael Krigsman: One difference between the firewall example and models, and decisions based on models, is that the firewall is really like insurance. It's protecting against a negative event that may happen. But when you talk about these models, you're talking about predictions that can have a profound benefit on business outcomes. That's number one. Number two, understanding the model and its implications requires a deeper technical understanding than the abstract concept of "Well, we're going to turn off the firewall," which is pretty simple, pretty straightforward.

Scott Zoldi: Agreed. Right, and so I think one of the things that will help enable this – because the executives also don't understand how firewalls work or the more sophisticated technology in cyber – is this concept of what we call humble AI. I could imagine a conversation going like this:

"Hey, this is the risk that we have." The company, as a group – at this executive level and maybe with the help of the executive committee – says, "Okay. We are going to take the risk because we think we're okay with that risk, but this is what we're going to do. We're going to monitor it in production. If it crosses these thresholds, we are going to drop that model, and we're going to move to a simpler model that we better understand."

That concept is called humble AI. It's this sort of concept that we're going to have a backup plan. We're going to have a fallback plan.

The thing that's not occurring very often right now, Michael, is monitoring of these assets. We did a survey with Corinium, and we found that (across CIOs, CEOs, and chief ML officers) only 20% of leading organizations actually monitor their models.

Part of this would be, "Hey, we'll take on some risk here, but we're going to have this monitoring process in place. If it crosses these thresholds, then we're going to admit that it becomes too big of a risk for us to take and we're going to drop down to a simpler model." That's what we want to get to is that fallback but also a data-driven decision.

We don't want it to be an academic conversation at the end of the day because, yeah, you're right: each of these executives is not going to have an ability to opine on the relative risks and values of deep, deep technology. But having those guardrails in place, and ensuring that if things are coming off the rails you have a fallback plan, is critically important. I think that's also part of the responsible AI piece that we haven't touched on a lot. This monitoring, and what you do as you have to pivot, is a core part of remediating or removing some of that risk in the decision, because you have a way to reverse or adjust that decision if things are not going in the correct direction.
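
That fallback logic is simple enough to sketch. Assuming a primary model, a simpler fallback model, and the monitoring thresholds the executive team signed off on (the class and parameter names here are hypothetical), a humble-AI wrapper might look like this:

```python
# Hypothetical sketch of a "humble AI" fallback wrapper: score with the primary
# model while monitoring metrics stay within agreed thresholds, otherwise drop
# to a simpler, better-understood model. Names and structure are illustrative.
from typing import Callable, Dict


class HumbleAIScorer:
    def __init__(self,
                 primary_model: Callable[[dict], float],
                 fallback_model: Callable[[dict], float],
                 drift_thresholds: Dict[str, float]) -> None:
        self.primary_model = primary_model
        self.fallback_model = fallback_model
        self.drift_thresholds = drift_thresholds  # e.g., {"feature_psi": 0.25}

    def within_thresholds(self, latest_metrics: Dict[str, float]) -> bool:
        """Check the latest monitoring metrics against the agreed risk thresholds."""
        return all(latest_metrics.get(name, 0.0) <= limit
                   for name, limit in self.drift_thresholds.items())

    def score(self, record: dict, latest_metrics: Dict[str, float]) -> float:
        if self.within_thresholds(latest_metrics):
            return self.primary_model(record)
        # Risk exceeded the agreed appetite: fall back to the simpler model.
        return self.fallback_model(record)
```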

What is the responsible AI framework at FICO?

Michael Krigsman: We have another set of questions, the same question from two different people. This is from Eric Emin Wood and also from Lisbeth Shaw. Essentially, they both want to know what is the responsible AI framework at FICO and how is that working out?

Scott Zoldi: Ours is based on this concept of what we call an analytic tracking document. This analytic tracking document is kind of core. It's been in place at FICO for 15 years or more.

Essentially, it describes all the steps of the modeling project: the objective, the success criteria, the tests that have to be done, who is doing the work, when it is going to get done in each of these sprints, and then how those steps are approved by myself and regularly reviewed as part of sprint reviews. We have become very accustomed to running through that responsible AI framework. It's working out well.

I have some of the very best data scientists in the industry, incredibly smart and intuitive, but they appreciate the gravity of the decisions they're making, and so they have found ways to work within the framework, to improve it, and to innovate around it. We don't see it as a hindrance. We see it as a way to ensure that we're operating in these responsible swim lanes. And we're protected.

A lot of our data scientists feel very well supported in terms of an ability to build the models appropriately in the first place, but also to flag issues throughout the model development. More importantly, once we establish what that looks like, we run through it, and now it's like a well-oiled machine – we generate hundreds and hundreds of models with these processes – the scientists now look for incremental ways to improve it. What I really like about it is that they are making that standard stronger and stronger each year with new IP and new ideas around how to address some of the challenges in the market. That allows us to build.

I think one of the things that data scientists really need to keep focusing on is that we get stronger as an organization, or as a group of data scientists, not based on an individual's achievement but based on the group's achievement. I think my team sees a lot of value in that. Eventually, you get a framework around this responsible AI development process that, in itself, will superpower each of them to achieve more: better incorporation of new intellectual property, a venue to have conversations that challenge what an ethical or responsible AI framework looks like, and also the assurance that, when the models go out, no one is losing sleep around how they're performing and how we're monitoring them to ensure that they're still on the rails.

Michael Krigsman: It sounds like you have built a culture inside the company (and especially among the data scientists) that prioritizes responsible AI and evaluating these models for bias, risk, and all the other things that you've been describing.

Scott Zoldi: Correct. One of the things that even my strongest research scientists will say is, "FICO is not a Kaggle competition." We're not impressed with small, incremental performance improvements.

We've had enough time to understand that if you see a little bit of a performance improvement, that's probably not real. [Laughter] When it goes into production, it probably won't hold up. It's not going to be robust.

We very much have gotten them, let's say, off an academic track where performance is key, to one where robustness, stability, and continuity of models are really, really important. You're absolutely right. It's a different culture in terms of what the success criteria for a model are.

Going back to what I said earlier about explainability first and prediction second: what are the success criteria? Prediction does not have to be number one. As soon as you start to adjust that – transparency, interpretability, constraining the model to be palatable, ensuring that you have controls in place so that it can be monitored and you can drop back to a different technology – those are really, really good success criteria when you're impacting human life.

Saying that you have a particular Gini coefficient on a model and it's bigger than the one next to it really doesn't help anyone out. You need to ensure the rest of these pieces are in place.

Yeah, sure, we care about performance, and we don't usually give up much on performance. But we're not going to do that at the cost of potentially doing things irresponsibly with respect to models and harming people in that vein.

Michael Krigsman: As we finish up, what advice do you have on this whole topic for data scientists, and what advice do you have for business leaders?

Scott Zoldi: For data scientists, I'd say this: Think a little bit about your own analytic organizations. Are they talking about this actively? Are there differences of opinion? Should you all get together if you are kind of a city-state model?

Start to come up with your own AI governance framework so that you can have a common groupthink. You're not going to be able to influence an organization on the executive team if there are a hundred voices. There has to be one consensus around how to do this.

There'll be some puts and some takes but have those conversations, educate one another, and see what that best practice is by getting some of the leaders in your organization to get together. Then make it a priority for them.

For the business side, let's make sure that the risks associated with AI and machine learning are an active conversation that's occurring at the executive team level and at the board level. Then ask the questions, "What are we doing about it? What are those processes? Are they written down?" We have these things codified in so many other parts of businesses, but not necessarily the machine learning piece.

If those two things occur – at the data scientist level, from the ground level up, and at the top level, being aware of the concern and the risk that organizations are increasingly taking on with these technologies – my hope would be that they meet in the middle at some point and more effectively define, corporately, what that standard looks like. Both sides will get what they need in terms of ensuring that models are built appropriately and the business functions at a relatively low risk while it benefits from machine learning and AI technology.

Michael Krigsman: Okay. What a fascinating discussion, and a very quick 45 minutes. I would like to thank Scott Zoldi, Chief Analytics Officer at FICO, for taking the time and sharing your expertise with us. Scott, thank you very much for being here today.

Scott Zoldi: Michael, it was a great pleasure. Thanks for giving me the venue to discuss these ideas with you.

Michael Krigsman: Thank you, a huge thank you to everybody who watched and especially to those people who asked such great questions.

Now, before you go, please subscribe to our YouTube channel and hit the subscribe button at the top of our website so we can send you our newsletter. Check out CXOTalk.com. We have incredible shows coming up. We'll send you notifications.

Thanks so much, everybody, and I hope you have a great day. Keep the conversation going on Twitter and LinkedIn. See you later. Bye-bye.

Published Date: Oct 15, 2021

Author: Michael Krigsman

Episode ID: 724