How to Lead Practical, Ethical, and Trustworthy AI

Join CXOTalk episode 825 for expert advice on ethical AI frameworks, governance, mitigating bias, and the real-world implications of responsible AI leadership.

01:03:12

Feb 23, 2024

Join an important discussion with well-known experts David Bray and Anthony Scriffignano to explore the challenges and opportunities in leading ethical AI initiatives. In a world rapidly embracing artificial intelligence, it's more crucial than ever for executives to be proactive in creating systems that benefit society and earn the trust of the public.

Watch these leaders discuss:

  • Frameworks for Ethical Decision-Making: How companies can establish comprehensive AI frameworks that balance innovation with fairness, transparency, and responsibility.
  • The Role of Governance: Best practices for internal and external oversight of AI, along with predictions on where emerging legislation might lead.
  • Mitigating Bias and Ensuring Fairness: How to proactively combat the discriminatory potential of AI and build more inclusive algorithms.
  • Real-World Implications: Insights on the tangible effects of ethical AI failures and successes, examining both potential harms and societal benefits.

Episode Highlights

The Importance of Trustworthy and Ethical AI

  • AI is becoming more intelligent and influential, so it's crucial to ensure AI is trustworthy and used ethically
  • Risks include overreliance on AI, potential for manipulation, and unintended negative societal impacts

Challenges in Regulating AI

  • Many countries have inconsistent AI regulations that are difficult to comply with globally
  • Regulations struggle to keep pace with rapid AI advancements; over-regulation inhibits innovation while under-regulation enables misuse

Frameworks for Trustworthy AI

  • Trustworthy AI should demonstrate benevolence, competence, and integrity
  • Existing frameworks from OECD and others provide principles to evaluate AI trustworthiness and transparency

Societal Impacts of AI

  • AI could uplift society but also disproportionately benefit a privileged few
  • Concerns include job displacement, wealth distribution, and limiting socioeconomic mobility

Recommendations for AI Governance

  • Engage diverse stakeholders in responsible AI development and oversight
  • Establish safe spaces for experimentation, continuously monitor impacts, and adapt governance accordingly

Explainable AI and User Choice

  • Develop explainable AI systems that allow humans to understand how conclusions are reached
  • Give people the choice to opt-in or opt-out of AI-powered services
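The explainability point above can be made concrete with a toy sketch (not from the episode; the feature names, weights, and threshold are invented for illustration). For a simple additive scoring model, per-feature contributions sum to the final score, so a human can see exactly how a conclusion was reached:

```python
# Toy "explainable AI" sketch: a linear scoring model whose decision
# decomposes into per-feature contributions. All feature names and
# weights are invented for illustration only.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = 0.1
THRESHOLD = 0.0

def score(applicant: dict) -> float:
    # Weighted sum of features plus a bias term.
    return BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant: dict) -> dict:
    # Per-feature contributions; together with the bias they sum to the score.
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

applicant = {"income": 2.0, "debt": 1.0, "years_employed": 3.0}
s = score(applicant)                       # 0.1 + 1.0 - 0.8 + 0.9 = 1.2
contributions = explain(applicant)
decision = "approve" if s > THRESHOLD else "decline"
```

Real systems rarely decompose this cleanly; the design point is that when they cannot, attribution techniques approximating this kind of per-feature breakdown are what make an informed opt-in or opt-out choice meaningful.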

Navigating AI Ethics Challenges

  • Partner across sectors to navigate complex AI ethics issues and develop outcome-based policies
  • AI governance requires constant re-evaluation as technology advances and societal contexts evolve

Preparing for an AI-Driven Future

  • Continuously update skills to adapt to AI-driven job transformations
  • Establish social safety nets and support job retraining for those displaced by AI automation

Key Takeaways

Trustworthy and ethical AI is crucial as the technology becomes more influential

  • AI is rapidly advancing and being used in more aspects of daily life, raising concerns about potential negative impacts
  • Establishing trust requires AI systems to demonstrate benevolence, competence, and integrity through transparent and accountable practices

AI governance faces complex challenges due to inconsistent global regulations and rapid technological change

  • Many countries have AI regulations but they are fragmented and difficult to comply with, risking over-regulation that inhibits innovation or under-regulation that enables misuse
  • AI governance requires engaging diverse stakeholders, establishing safe spaces for experimentation, continuously monitoring impacts, and adapting policies based on evolving societal contexts

Preparing for an AI-driven future requires proactive efforts to address job displacement and equitable distribution of benefits

  • AI will likely displace some jobs, especially routine tasks, while also creating new opportunities that require updated skills
  • Ensuring AI benefits are widely distributed rather than concentrated among a privileged few may necessitate expanding social safety nets, enabling job mobility and retraining, and preventing AI from locking people into narrow job categories

Episode Participants

Dr. Anthony Scriffignano is an internationally recognized data scientist with experience spanning over 40 years in multiple industries and enterprise domains. Scriffignano has an extensive background in advanced anomaly detection, computational linguistics, and advanced inferential methods, and has leveraged that background as primary inventor on multiple patents worldwide. He also has extensive experience with various boards and advisory groups.

Scriffignano was recognized as the U.S. Chief Data Officer of the Year 2018 by the CDO Club, the world's largest community of C-suite digital and data leaders. He is a Distinguished Fellow with The Stimson Center, a nonprofit, nonpartisan Washington, D.C. think tank that aims to enhance international peace and security through analysis and outreach. He is also a member of the OECD Network of Experts on AI working group on implementing Trustworthy AI, focused on benefiting people and planet.

Dr. David A. Bray is both a Distinguished Fellow and co-chair of the Alfred Lee Loomis Innovation Council at the non-partisan Henry L. Stimson Center. He is also a non-resident Distinguished Fellow with the Business Executives for National Security, and a CEO and transformation leader for different “under the radar” tech and data ventures seeking to get started in novel situations. He is Principal at LeadDoAdapt Ventures and has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response from 2000-2005. Dr. Bray previously was the Executive Director for a bipartisan National Commission on R&D, provided non-partisan leadership as a federal agency Senior Executive, worked with the U.S. Navy and Marines on improving organizational adaptability, and aided U.S. Special Operations Command’s J5 Directorate on the challenges of countering disinformation online. He has received both the Joint Civilian Service Commendation Award and the National Intelligence Exceptional Achievement Medal. David accepted a leadership role in December 2019 to direct the successful bipartisan Commission on the Geopolitical Impacts of New Technologies and Data, which included Senator Mark Warner, Senator Rob Portman, Rep. Suzan DelBene, and Rep. Michael McCaul. From 2017 to the start of 2020, David also served as Executive Director for the People-Centered Internet coalition, chaired by Internet co-originator Vint Cerf, and was named a Senior Fellow with the Institute for Human-Machine Cognition starting in 2018. Business Insider named him one of the top “24 Americans Who Are Changing the World” under 40, and he was named a Young Global Leader by the World Economic Forum. He has served in President, CEO, Chief Strategy Officer, and Strategic Advisor roles for twelve different startups.

Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.

Transcript

Michael Krigsman: Welcome to episode 825 of CXOTalk. We're discussing how to lead practical, trustworthy, ethical AI. Our guests are David Bray and Anthony Scriffignano, who are longtime friends of CXOTalk. And with that, gentlemen, welcome to CXOTalk. It's great to see you both.

Anthony Scriffignano:  Thank you. Likewise. Absolutely.

David Bray: It's great to be here, Michael.

Michael Krigsman: David, would you like to give us a brief introduction to your work?

David Bray: The short version is I've had the fortune of being at the cusp of things that used to only be possible by the Department of Defense or the US intelligence community becoming commercialized: everything from small satellites, to tackling bioterrorism, to dealing with the challenges of disinformation, to dealing with AI before it really caught on. I know we did a lot of CXOTalks back in 2016 and 2017 on that, Michael.

And now just trying to help societies figure out how they want to use data and tech to uplift lives.

Michael Krigsman: Anthony Scriffignano, please tell us about your work.

Anthony Scriffignano: I've been involved with AI and data science since before those terms were popular, more than 40 years, in things related to computer and information sciences, digital transformation, big data before it was big. And I've done a lot of work in terms of developing new capabilities in identity resolution, geospatial inference, something called veracity adjudication, which is checking the truthiness of things, and multilingual semantic disambiguation, which is figuring out when things in different languages are likely to be talking about the same topic.

And lately, I'm very focused on the effects of disruption and the disruption of human response to disruption, and how that creates both opportunity and ominous new types of risk we don't have names for yet.

Michael Krigsman: We're talking about this complex topic of trustworthy AI. We sometimes hear the terms ethical AI, responsible AI. Why is this such a crucial topic at this moment in time?

Anthony Scriffignano: We need some new words, right? There's no intelligence in artificial intelligence. It's a bunch of math, but that math is starting to do things that look pretty intelligent to human beings. And so it's really important that as our devices and the services available to us get closer and closer to behaving like they're intelligent, we don't get overly reliant on them; that we understand when we're dealing with machines, or at least that we might be; and that we understand the impact of information that may not have been available to those machines when they made their decisions.

That we understand how we might be manipulated by the advice of machines. It is virtually impossible to get through your day today, unless maybe you live completely off the grid, without having a recommendation engine or some sort of algorithm influence the things you're seeing, the options that you have, and ultimately the ways in which people can take advantage of you if you're not aware of these sorts of things.

So now more than ever, it's important to start getting in front of this before we reach a point where we've lost that opportunity.

David Bray: And it's been a long time coming; AI dates back to the mid-1950s in some respects. Whenever anyone talks about AI, it's always worth asking what flavor or what version of AI, because it's almost equivalent to saying "all fruits and vegetables," and it's like, well, which fruit are you talking about? In some respects, I think a lot of the questions about trustworthiness, as Anthony mentioned, come because of what's now possible: the sheer scope and scale and interconnectedness of things.

And at the same time, I think in some respects AI is kind of a proxy for a broader question, which is: in a society in which we now have 45 billion or more networked devices on the planet, plus all this data exhaust that's being created, plus AI, how do we want to live? What is an ethical, what is a just society?

In some respects, AI is a proxy because a lot of the questions being raised about bias or fairness exist even if you don't use AI in your work. And at the same time, I think we're at a point where most people nowadays will at least be willing to use a GPS to route them somewhere without putting too much question to it.

And so, broadly, it is a question of: how do we make sure humans aren't negatively or disproportionately impacted as these technologies are rolled out? How do we make sure they're trying to uplift communities in a way that is consistent with how individuals and communities want the technology to uplift them?

Michael Krigsman: David, you just mentioned as a premise that the goal of AI is to uplift communities, but that's not everybody's goal. Does that in a way get to the heart of the problem, the fact that we have conflicting agendas?

David Bray: 100%, and you could replace the word AI with organization. All these questions about whether we can trust organizations, and whether we can trust that they are going to behave ethically, responsibly, and in a trustworthy fashion, absolutely apply. And as you sort of hinted at in your comment there, Michael, the question applies whether they be governments, whether they be for-profits, whether they be non-state actors, and regardless of whether they're using AI or not.

What are the ways that we can assess if they're benevolent, if they're competent, and if they're operating with integrity? I often try to unpack what we mean by trustworthiness, and it's been shown that benevolence, competence, and integrity are part of those perceived actions that lead to trustworthiness.

Anthony Scriffignano: I would add that benevolence and competence are also very relative. What you may see as benevolent, I may not, if I am disenfranchised by your actions because they're for the greater good. We all are faced with these dichotomies every single day, depending on how old you are, whether you're a digital native or a digital immigrant, how much you care about privacy.

You may want your devices to be highly optimized to anticipate your every need, and then others want privacy. Well, those are opposite things to want. Because if you want that thing to behave like you want it to behave, then it's got to watch you and keep track of how you normally behave in order to help you like that.

And that can be seen as an invasion of privacy. So I don't see these things as binary: do you want privacy or not, do you want customization or not. A lot of the debate comes when we polarize ourselves like that. Are you for or against data privacy? It's a false question. I think of it more like a pendulum that can swing between extremes, and where it is at any given moment should be there intentionally; you should be aware of where it is, and aware that it can move.

Are we willing to give up some privacy if our life and limb are at risk? Yeah, you want to see my medical records, right? You want to use my medical records to figure out how much to charge me for my insurance? I don't know, maybe. We can move that pendulum.

Michael Krigsman: There are a variety of frameworks that have been published by organizations, by the US government, by governments in other countries. How can corporate technology and business leaders navigate this huge ethical set of conundrums?

Anthony Scriffignano: I think also, if you change "ethical" to "regulatory," it's a very different question. Ethically, companies will have a certain ethos. They're always trying to balance the benefit of the shareholder, if they're a public company, against the benefit of the customer and the employee. You can't do all of those things at once.

In many cases, if you do something that's better for one, it moves another away from some goal, or at least it's perceived that way. So they're always faced with that. Now you throw in regulation, and in the context of AI, David and I were chatting before this call: it's hard to count, but I think we can agree that there are dozens of countries that already have regulations in force that affect AI, even though we haven't clearly defined what it is we're regulating as AI.

Well, innovation will always be out in front of regulation. There's an AI toothbrush out there; I don't know, is that AI? There's a lot of things being called AI right now because it's kind of cool. And the rub is, when you start bragging about your AI, you're sort of opening yourself up to all this AI regulation.

And some of it is incredibly difficult, if not impossible, to simultaneously comply with if you want to operate globally, because one country might value national defense over the growth of the economy, and another country might value the privacy of the individual less than transparency in making a decision. Those are decisions that regulators have to face when they write these regulations.

They don't all want the same thing, and they don't all work from the same first principles. You want your AI to work a certain way? Well, your AI is a set of, I almost said rules, but it's not even rules anymore; at best it's collections of rules and heuristics. And trying to get a system to work in a certain way when it can be manipulated, when the data can change and change its behavior, runs into something called non-determinism, which comes into play when multiple methods are used simultaneously.

It's not as simple as we all want to make it, where we say we'll just update the rules. These regulations are not clear, their implementation is not clear, and complying globally, in time and in place, is arguably impossible. Other than that, it's pretty straightforward.
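Scriffignano's point that a system built from fixed rules and heuristics can still change behavior as the data underneath it changes can be sketched in a few lines (an illustrative toy, not anything from the episode; the rules, thresholds, and field names are invented):

```python
# Toy sketch of behavior change without rule change: two fixed heuristics
# vote on whether a transaction is "risky". Neither rule is ever edited,
# yet the combined system's decision flips as the data drifts.
# Thresholds and field names are invented for illustration.

def rule_amount(txn):            # flags unusually large amounts
    return txn["amount"] > 1000

def rule_velocity(txn):          # flags bursts of activity
    return txn["tx_per_hour"] > 20

def combined(txn):
    # Combination policy: any single flag blocks the transaction.
    votes = [rule_amount(txn), rule_velocity(txn)]
    return sum(votes) >= 1

before_drift = {"amount": 500, "tx_per_hour": 5}
after_drift = {"amount": 500, "tx_per_hour": 30}  # same amount, new usage pattern

decision_before = combined(before_drift)   # not blocked
decision_after = combined(after_drift)     # blocked, though no rule changed
```

The point of the sketch: a regulator auditing the rules alone would find nothing changed, yet the system's observable behavior shifted with the data, which is why "just update the rules" is not a complete compliance story.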

David Bray: If you think of AI as being fast organizations, or organizations as being slow AI, in some respects we already have a lot of laws on the books that already apply to ensuring, ideally, better outcomes, whether they're country specific or region specific. And the challenge is just the speed at which AI can either do things or go off the rails.

That is what is worrying people. Not to mention that a lot of it is dependent on how good the data is. And I think we need to unlearn the lessons of the past. In some respects, I can't believe there was a meme that came out in the early 2010s that data was the new oil.

And I'm like, wait: oil, you use it, it's gone. Data, you use it, it's still there. And if anything, the more people that use it, especially if they are involved as stakeholders, the better it is refined into signal, into the important parts; they actually create more data, and that data itself is useful in its use. So in our rush to focus on AI and AI ethics and laws and regulations, we may be missing the fact that it's really about the data.

And I don't want to use the word data ownership, because that's not the case at all; you take a photo, and who owns it is very murky. But can you think about data stakeholderism, in which you can have multiple stakeholders? And how does that shape the outcomes, whether you use some flavor of AI or not?

And then the last thing I would say is that it is challenging governments, especially governments that are more free and open, to begin to prescribe things that are more outcome based as opposed to explicit "thou shalt or thou shalt not." And I think that's a challenge because outcome-based approaches, while they are definitely going to be what we need given the sheer rate of change in data and technology in the world, are not traditionally how free and open societies have written their laws.

Anthony Scriffignano: I'm in violent agreement with you. There are a couple of things you said that I'd add a little nuance to. We have lots of laws on the books for human behavior. Let's take agency, right? If I hire somebody to deliver dynamite for me and, following my instructions, they trip and fall and blow up your building.

You know, I'm part of that chain of people that you're going to be having a word with, right? Digital agency, not so much. Say a bot speaks to you in a way that you consider offensive, or doesn't consider your resume in a hiring process: the laws have not quite caught up with that yet. They're trying, but they have not.

So, to some extent, people can use AI to do things that they can't do themselves, and that's dangerous. That is absolutely happening in digital advertising, and in some of the worlds that we don't want to be talking about on this call. And then, in terms of outcomes, some of the principles, guidelines, and frameworks that are out there are very outcome focused. I'll just pick up on what you mentioned earlier: the 1950s and the early attempts to define AI.

They started with things like "project human intent" and "serve humans." Those are outcomes. If we can at least say that this particular AI is not projecting human intent, that it's doing the opposite, then we can understand that it's doing something different. Maybe the human's intent is wrong.

This happened a lot in institutional investing. They use these really complex algorithms, and the quant guys come up with absolutely, insanely complex systems that try to do a little bit better than the other guy at predicting whether a thing is going to go up or down. And we got to a point where the humans who were very good at that, if they took a position that was different from the algorithms', had to explain themselves, whereas the algorithms didn't, because nothing had caught up with that yet.

And I'm not sure I'd want to be the person getting fired because the algorithm disagreed with me too much. Right?

Michael Krigsman: Subscribe to our newsletter and subscribe to the CXOTalk YouTube channel. Check out CXOTalk.com. Because we have incredible shows coming up. We have a couple of questions from Twitter. Let's jump there because I think they get to the heart of some of the issues that you're both raising. The first one is from Arsalan Khan and he asks who decides what is ethical?

And you both touched on this earlier: for some, your version of ethics might seem like you're creating gatekeepers to keep others away, and who makes sure that those gatekeepers are also ethical? And I think this also loops into what Anthony was just discussing. David, any thoughts on this particular issue of who decides what's ethical when it comes to AI?

David Bray: Given that we've got 3,000-plus years of human philosophy that has not been able to reach a universal answer, I'm not sure we will do that for AI, but I think Arsalan is asking a good question, and it's worth teasing out: ethics are socially defined, while morals are your own internal compass.

There have been examples in human history in which things at the time were considered ethical that we in 2024 would look back at and say were definitely unethical. Even today, there may be cases where, if you are a doctor in certain parts of the world and someone has stage four cancer and is near end of life, you may morally feel it might be right for them to decide how they want to die with dignity.

But depending on where you live in the world, there may be ethics and laws that say you cannot assist with that. And so your ethics and your morals may conflict. So I would answer Arsalan's question this way: first, your morals should be your own moral compass. As for ethics, they are socially defined, and that's maybe what makes the ethics of AI so complicated: it's tied not just to the nation you live in, but also to the world as a whole.

Anthony mentioned that there are at least three dozen different regulatory regimes out there for AI. I'll give a shout-out to Professor JP Singh at George Mason and others, who actually did a study in August of last year: there are at least 54, and they overlap, but they also diverge widely.

And at least in August of 2023, nobody had actually weighed in on the ethics of AI in health care, despite our having had a pandemic recently. And so I think the challenge is this: we want to be a world that's interconnected, with cross-border flows, cross-border commerce and trade, but I'm not sure getting every nation to agree to one ethical, let alone legal, standard for AI is going to happen anytime soon.

And that's probably why this is so complicated, is because ethics is tied to the laws of your nation, and ultimately how you interact with the world.

Anthony Scriffignano: To add one quick thing to that brilliant discourse, and thank you for that, David: context matters as well. It's not like you can make a decision and put a stamp on something and say that was ethical, because as human beings we learn, and also context changes. And so it may be that something was considered ethical until we understood the plight of some marginalized other that we didn't know about.

Or it might be that something was considered ethical because we didn't have the ability to break it apart and understand the unintended: how something was being used in a way that we hadn't intended.

The best example I can think of right now is Nobel and dynamite. Construction? Yeah. Bombs? Right. So, to get back to our question, it is certain that no good answer to it exists that doesn't include constant re-evaluation of how things might have changed over time, or how context might have changed.

David Bray: One very specific example of ethics changing over time: back in World War One, the British thought that submarines were unethical forms of combat because they were underneath the water. But then when World War Two came and their convoys were at risk because of that blind spot, they quickly changed their position.

And so, while not every question about ethics involving AI will have national implications, we need to recognize that certain choices we make may put either members of society or nations at risk, if they choose a path that later needs to be updated because the world has changed in a fundamental way.

Michael Krigsman: We have a question from Chris Peterson, who again is touching on this complexity issue, the balance of regulation versus inhibiting innovation. He goes back to the example you were speaking about earlier. He says, of course, today, how many people would be literally lost trying to use a map and compass? And so he asks: when it comes to regulation, what are the risks of muting the natural intelligence that may grow through this AI technology, and therefore driving innovation down?

And how do we balance that with the regulatory efforts?

Anthony Scriffignano: This is exactly the risk of over-regulating: you get to a point where some good that could have happened ends up out of bounds, all because of that regulation. I don't want to get into specifics, but putting pulse oximeters onto, say, smartphones, that example comes to mind. Or GPS: at the time, that was not a public thing.

Those signals were not available to the public, and then all of a sudden somebody had to decide the public could benefit from this. And somebody else was saying, yeah, but what happens when we get invaded and the enemy can find their way around our territory? Well, okay, we'll put an error into it, and we'll make the error random.

You make the error random, and now things are telling you to turn where you shouldn't turn, and people are going down one-way streets. It takes time for these things to balance, and I don't think they ever completely do. I think that if we tell ourselves the truth, this juxtaposition between what we can do, what we should do, and what we may do will always be there.

And that's the essence of what makes this a valuable thing to pay attention to.

David Bray: If you remember when GDPR rolled out in Europe, it was very well intended on Europe's part, but the challenge was they said you need to protect health care data without defining what health care data was. And I had conversations, I'm sure Anthony did too, with others saying, well, how are you going to define this?

Is it my height, my weight, is it my name? And they said: the next 10 to 15 years of court cases. But that has a chilling effect, because then do you want to be an innovative health care data startup in Europe, if the risk is that whatever choices you make as a startup are going to be fodder for a health care exposé down the road?

So I raise that because in some respects it's not just the choices you make in the moment, it's how you're going to further refine whatever approach you put out there. That doesn't mean you shouldn't put something out, but you need to have a more agile and adaptive approach, and realize that otherwise it might actually inhibit innovation, as Chris was asking. The other thing I would say is that if we go up to the balcony, there's a broader thing happening here, which is all these different technologies, from AI to synthetic biology to quantum computing and things like that.

They are super-empowering organizations and people to do things that would only have been possible 40 or 50 years ago, dare I say, by the CIA and the KGB. And so this raises fundamental questions: how comfortable are we with people being super-empowered through AI and other convergent technologies? And how do we live in a world in which, yes, you can do wonderful things with AI, you can save lives, you can figure out ways to solve really hard problems involving the climate or your local region.

But you can also generate, very quickly, massive amounts of deepfakes, or things that damage a company's brand or a person's reputation that aren't true. And so we haven't figured out the balance of dealing with the fact that these technologies, in some respects, super-empower people to do things that only powerful nations could do 40 or 50 years ago.

And two, in some respects, it's not just one issue; there are 40 or 50 watermelons, when most governments can only deal with 2 to 3 watermelon-size issues at a time. And so I would submit that what we really need is to find places, whether they be universities or nonprofits, that don't have a sole governmental interest, that don't have a sole for-profit interest, but can actually, with the community, in a bipartisan or nonpartisan way, try to at least experiment and involve for-profit participants.

Government participants, members of the public, to navigate this space. Because honestly, the only way I think we're really going to do this is by learning by doing. Any regulation you attempt is always going to be out of date within a month of rolling out, if it's not already. You've got to do learning by doing.

Anthony Scriffignano: Places in the world where they have created, safe spaces where you can use data and you can innovate. And the regulations kind of don't apply when you're working in the context of, for this amount of time in this space while we're watching you kind of thing. those are interesting first steps toward what they're talking about. I don't want to start naming countries, but they're actually other countries than the ones that we're speaking, right now.

So one could argue maybe they're a little bit more intense in that regard. I want to put a word of caution in here. You mentioned quantum computing before, David. There are certain things that are going to happen whether we pay attention to them or not. There's a whole bunch of data that used to be safely encrypted that isn't so safely encrypted anymore, and it's already been acquired.

So it's not like you're going to stop someone from getting to that data. They already have it; they just can't read it right now. And so there are going to be shifts in the nature of what's possible that will happen just because of technological advances. And we're going to wake up and say, well, why can't we do anything about this?

You can't do anything about it because you didn't think about, you know, quantum resiliency in that case. There is no credible argument that you can just continue to react and stay relevant in this field. You have to, to some extent, guess where the puck is going and put some of your effort there, or you will absolutely become irrelevant.

Michael Krigsman: We have an interesting question from LinkedIn from Duncan Whittaker, a student at Montclair State University, who says: how do you feel about learning in the future with AI? Do you think AI will help or harm students getting into fields such as computer science and information technology? And how will computer science and IT evolve as a result of AI?

Anthony Scriffignano: The first thing is, you have to look at it as something that can help, and not as a crutch — not a way to get away with doing less, not a way to get around things. I think this is a challenge for students, and it's a challenge for teachers. Teachers know that students are going to try to use generative AI to produce the answer to the question.

So now we have to ask questions like: how do you think? How do you feel? There's more emphasis on critical thinking. Well, those questions are a lot harder to ask, and they're a lot harder to grade equally among all the students. And the technology is getting better at faking that a person thought about it. So if you're using the technology just as a shortcut, then I think you're going to kind of get what you deserve, because you'll slowly become commoditized, just like everybody else — because that's what those technologies do.

If you're using it to help you do your research, that's different. David, let's just do an experiment: raise your hand if you did any research using generative AI before today's discussion. Of course, right? We did, right? But are we reading from a script that was generated by something? No, of course not. So you've got to be able to use your brain, and you've got to be able to put your own thinking into it.

You need your own thinking behind it. At the end of the day, it's your fingerprints and your teaching: what did you touch? What did you build? What did you make? What did you learn, and what did you teach other people? Nothing else matters. And whatever technology you used to do that — great, have a nice day. David, you probably have a different opinion.

David Bray: I was going to first address this question. Is this going to help or hurt students? And I was going to say yes.

Anthony Scriffignano: Yes, yes — it's going to do both.

David Bray: But, but, but you know, to sort of dive a little bit deeper in some respects we've always had technological advancements that both improve of what we can do and how we need to learn. But I would say, how many of us now know how we use a sundial to discern what time it is? You know, that's not something you know, you probably do because you're you are the the, the the the version of the eternal Boy Scout.

But most people probably wouldn't. And going back to the GPS example: if GPS were somehow taken off the board — who knows, heaven forbid, by a tactical nuke in space — how many people, one, have maps in their cars, and two, would be able to figure out where to go without GPS if their next meeting were someplace they didn't normally go?

And so in some respects, society moves forward. We do collectively lose some knowledge that, if ever a bad day happened, we'd have to go back to. But at the same time, we're also able to tackle, as Anthony was mentioning, more interesting, more challenging, more complex questions of what scientific progress and advancement are.

And so, specific to students in computer science, information technology, and informatics, I would make sure that we both deal with making sure they understand the fundamentals, regardless of whether we're using AI or not. But then I would quickly put on the gas and say: okay, now that AI is out there, how do you best work in human-AI teaming?

How do you work in groups? How does it make you do the work better? When does it fail? What is the sensitivity? One of the things that I think has not been answered yet — and I know there are people working on it; I want to give a shout-out to some people at Princeton, though there are other places, too.

As you add more and more synthetic data — data that's been generated by a machine — to your analysis, what is the sensitivity of the conclusions reached by the AI? And does it depend on the algorithms the AI is choosing to use? That's something we don't know, and in fact, it may turn out to be a blind spot.

There are some anecdotes that some algorithms are vulnerable to what's called self-cannibalization, which is: if you feed data that has been produced by the algorithm back to itself, that can cause the algorithm to go off the rails. Again, that may not be true for every algorithm, and it may also be a sensitivity issue, where it only happens once it tips past a certain point.

But I would say that where I see studies of computer science and algorithms going is understanding, first, how you pair and team humans with AI in a better way — but also, when does it break, and when is it more resilient?
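The "self-cannibalization" effect described above can be sketched with a toy simulation. This is an illustrative example only — the function name and parameters are ours, not anything discussed on the show: a Gaussian model repeatedly refit to its own samples gradually loses the spread of the original data, a simple analog of a model retrained on its own output going off the rails.

```python
# Toy illustration of "self-cannibalization": a model retrained on its
# own samples. Each generation fits a Gaussian to the previous
# generation's samples, then draws new samples from that fit. Estimation
# noise compounds, and the variance of the data tends to collapse.
import numpy as np

def self_train(n_samples=20, n_generations=1000, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, n_samples)  # the "real" training data
    variances = [data.var()]
    for _ in range(n_generations):
        mu, sigma = data.mean(), data.std()          # fit the model
        data = rng.normal(mu, sigma, n_samples)      # train on own output
        variances.append(data.var())
    return variances

v = self_train()
print(f"variance at generation 0: {v[0]:.3f}, at generation 1000: {v[-1]:.3g}")
```

With real generative models the dynamics are far more complex, but the qualitative point — outputs fed back as inputs compound estimation error — is the same.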

Anthony Scriffignano: I watched a video recently where someone was doing a job interview, and the point of view of the video was over the shoulder of the person being interviewed. So you see on their screen the person interviewing them, asking them a question. And it was one of these classic interview questions about how you would estimate, say, how many women drink coffee in Paris — and they don't care about the answer.

They care about your thinking and how you approach the problem. And you watch the person type into the tool they're using: how would I estimate how many women drink coffee in Paris? Then you see the generative AI working out the answer on the screen, and the person being interviewed is saying, let me think about that question and carefully consider it — while the answer is formulating.

And then, as the answer appears on the screen, they're sort of paraphrasing it and spouting it back. That's cheating. That is not using the tool to help you think better. I'm sorry, that is over the line. And maybe down the road somebody will save this clip and go, oh, he thought that was cheating back then.

Yes — in the context of, you know, February 23rd, 2024. I had to look at a machine to tell me the date; you know, I think that's cheating. There you go, right? But I think that, to some extent, you don't always know — but that is nowhere near the line, right?

You knew what you were doing, because if you didn't, you wouldn't have put that out there and said, look how clever I was. Right?

Michael Krigsman: We have a question from Elizabeth Shaw, who says: given the uncertainties around regulation, the ambiguities around governance, and the lack of clarity around the ethical issues that you have raised, what is trustworthy or ethical or responsible AI, and what are the outcomes?

David Bray: Maybe it's not specific to the technology — it's how you deploy it and how you roll it out. Trustworthiness would be engaging whoever the community is — whether they be your stakeholders, your customers, your citizens — and saying: look, here's what we're planning to do, here's how it works, and here's how we're going to put in place a mechanism to always be listening and always be learning, because we realize there may be, you know, non-deterministic effects that arise.

But I would say, in some respects, we need to shift the conversation from trustworthiness in the machine to trustworthiness in the broader socio-technical rollout of whatever the activity is — and that includes the data. And then again, I would go back to: are you thinking about how you are doing your best to build bridges that are benevolent?

As Anthony already mentioned, benevolence is a matter of perception — you may have intended something to be benevolent, and someone else sees it differently. How are you involving competency, and how are you involving integrity?

I'm a big believer that we actually need more involvement of the communities that are experiencing the AI — that we're doing AI with people, as opposed to to people. And so that may be focus groups, if you're in a business setting, or councils in which you're involving people, or citizen juries, which I know are actually done in Australia — where you're regularly going back to people and saying, here's what we plan to do, here's what we're doing, here's how it's going, and allowing people to say: have you thought about this, or have you thought about that?

Anthony Scriffignano: There are frameworks for trustworthy AI, so it's not a completely ethereal thing where we have to do a lot of hand-waving to describe it. The OECD has some principles that are very well documented. There is, as David mentioned, the work of JP Singh. There are lots of places where you can go to ask: what are the frameworks that are out there?

What are the standards that I can hold myself to? So you don't have to start with a blank piece of paper or a focus group. You can at least start with these frameworks, and then you can decide: maybe in a healthcare setting, more of this, less of that; maybe in a manufacturing setting, less of that, more of this. But there is a place to start.

Michael Krigsman: This is from Bio Cosmology on Twitter — I love that name — and Bio Cosmology asks: should we support initiatives like the ban-deepfakes open letter? Is this still realistic? And again, I will just interject my comment that you both are making the assumption that the AI is designed to build a better world. And no, no, no — okay, okay, I have an angle.

David Bray: Take this glass that I have here in my hand. Sadly, I could break it and actually make it a deadly weapon. So everything in the world can be dual-use or multi-use, and used poorly, and AI definitely can be used to hurt people. As for the open letter — I actually have signed it.

So I would say the open letter is a start, but it is simply that — a start. You need to do much more than that. You need to not just say, I want someone to do something about it; the best way to predict the future is actually to start doing something about it yourself.

And so we need to figure out how we get ahead of the curve — how we get left of boom, in other words, before bad things happen to people. Because unfortunately, yes, generative AI can produce very realistic-looking, inauthentic human content that can hurt people and damage reputations. And this is not something new.

You know, before generative AI, there were tons of bots that were also used to harm people. And before we heard about Taylor Swift unfortunately having deepfakes made of her, there were plenty of people having those problems too. So I would say it's not a question of support or not support, or is it realistic.

It is a question of what other actions are going to follow. And it doesn't have to be one monolithic action. If anything, I'm a big believer that it can be small actions that individual communities and people take, which build up to larger actions. But I guess the short answer would be: it is an activity, but it is definitely not sufficient for the future.

Michael Krigsman: Anthony, we have another question from Twitter. This is from Shelly Lucas, and she says: how is AI governance different from typical data governance approaches, and how does gen AI, for example, impact this?

Anthony Scriffignano: Data governance is much more discrete. There are rules for discovery, curation, synthesis, fabrication, and delivery of data. There are guidelines — David mentioned GDPR — there's data privacy and access, all kinds of regulatory frameworks. There are people who have certifications and degrees in this; you can study it; there are books written about it. So it's like the traffic code.

You know, we can define good driving, we can define safe driving, and we can argue about it, but in general it's really clear. As you move up the continuum that was in that question, it gets more and more foggy, more and more non-deterministic, more and more relativistic, and more and more situational. So let's get into AI, and leave the word generative out of it for a moment.

Let me just put in a footnote for one second. For those that might not be following: generative AI is AI that produces its own content, as opposed to just studying your content — that's where the word generative comes from. So if we just talk about AI, we can think about supervised and unsupervised. Sucking up a lot of data about the past and predicting what might happen in the future — that's supervised AI. Then there's looking at a whole bunch of unstructured things and kind of organizing it and drawing conclusions.

That's a gross oversimplification of unsupervised AI, and everything else is somewhere in between. It's harder to govern that, but you can ask: did you use any data that contained gender or race? When you produced your conclusions, did you have enough degrees of freedom for the variability in the context that changes? There are ways to audit that sort of thing, if incompletely. You can govern it, and you can have model governance. You can do things like regression testing, backtesting, and heuristic evaluation. You can get a bunch of similarly incented, similarly instructed people, ask them to look at the results, draw conclusions, and vote — that's how the Olympics works when you score a performance.

You could do that with AI. But as you move up into generative AI, it's kind of the Wild West right now, because the AI itself is producing content. If you're using AI to judge the efficacy of the content, it's going to say it's really good, because it produced it. And if you do something adversarial, where you create one system to govern and another system to produce, then by definition the governing system only sees samples of what it's intended to cover.

And so there's that sort of self-consuming, self-licking ice cream cone problem, and I don't think we've completely solved that one yet. As you get to the top end of that continuum, it gets more and more squishy and more and more open to malfeasance and manipulation and arbitrage — all those sorts of first-mover advantages or disadvantages, depending on which side of it you're on. At the short end — data — it's pretty clear. Generative AI? Good luck. I hope you're taking notes on what you're doing, because you're going to have to defend it later, once we understand it better.

Michael Krigsman: Hue Hoang on LinkedIn says: how do you make decisions around what data should be made unavailable for consumption, where that information may no longer have meaningful value but can possibly lead to misinformation or disinformation? I think another way of saying it is: how do you decide what data to hold back? Look at OpenAI, right?

This new Sora video model — have you seen those videos? Unbelievable. I don't think they're going to release it to us anytime soon because of the damage it could do. So how do you make that decision?

David Bray: If it's data that is obviously illegal, or data that is explicit, I would definitely keep that away from the machine being trained on it. That doesn't mean some bad actor is not going to use it, but if you're operating as an ethical organization, there's certain data that you just cannot use.

I think the challenge, though — and Anthony will probably reinforce this as well — is that as you selectively choose to remove data, you're introducing biases into the generative AI's way of perceiving the world and thinking. And that itself has second-order, third-order, and unintended effects. So, if anything, the best way to monitor these systems is to ask: am I okay with what I'm seeing in terms of the outputs?

But again, the challenge is that generative AI makes no promises. It may be consistent for the next thousand answers, and then when it gets to answer number 1,001, it goes off the rails and you're uncomfortable. So you always have to be learning. I think this points to a deeper thing Anthony was saying, which is that deep learning is only good if the present and future you're presenting to it are embodied in the past — it's not really good if the present and future change in ways that are not embodied in the past data set.

And so this may actually be us discovering a limitation of deep learning. Now, it's great for a lot of things, but just like expert systems, decision support systems, and natural language processing before it, there may come other things — some say it's active inference; it may or may not be that — but I think we will discover there are brittlenesses and things that break.

Personally, if I'm in a world that is regularly changing, deep learning may not be the go-to tool that I want to reach for. The last thing I would say is: this is why, again, I think the only way to practically do AI governance is to have oversight boards that are constantly looking and constantly learning, but doing so in a way that asks: am I comfortable with these outputs as they're being rolled out?
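The point that a learned model is only as good as the past it was trained on can be illustrated with a toy regression — the numbers here are made up for illustration, not taken from the discussion. A line fit on one regime keeps working while the world stays the same, and fails badly after a regime change:

```python
# Toy illustration of distribution shift: a model fit to the "past"
# regime is evaluated on data where the world has (or hasn't) changed.
import numpy as np

rng = np.random.default_rng(42)
x_train = rng.uniform(0, 10, 200)
y_train = 2.0 * x_train + rng.normal(0, 0.5, 200)       # old regime: y ≈ 2x

# Fit a line to the old regime (stand-in for "training on the past").
slope, intercept = np.polyfit(x_train, y_train, 1)

x_new = rng.uniform(0, 10, 200)
y_same = 2.0 * x_new + rng.normal(0, 0.5, 200)          # world unchanged
y_shift = -1.0 * x_new + 5.0 + rng.normal(0, 0.5, 200)  # regime change

pred = slope * x_new + intercept
err_same = float(np.mean(np.abs(pred - y_same)))
err_shift = float(np.mean(np.abs(pred - y_shift)))
print(f"mean error, unchanged world: {err_same:.2f}; after regime change: {err_shift:.2f}")
```

A deep network is far more expressive than a line, but the failure mode is the same in kind: nothing in the fit anticipates a world the training data never contained.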

Again, because ethics is socially defined. You know, there were things back in the 1950s that we would not find ethical now, and I have no doubt that come 2035, there are going to be things where they'll look back and say: how did they ever allow that to happen — or how did they not allow that to happen — in 2024?

So there are two things, I guess, I would say. Just like we have had technologies that democratize what people can do, we need to democratize two things. The first is the ability for people to exercise what I call information discernment, because it's going to get harder and harder in the coming months and years to discern what is authentic versus inauthentic online, precisely because of these tools.

The second thing is that we need to roll out what I would call digital dignity. That still has to be scoped out, but it's you having a choice in how your persona, your identity, and your brand are treated in the digital domain.

Michael Krigsman: With each one of these questions, I realize we could have a lengthy discussion. These are all excellent, thought-provoking questions, but they're...

David Bray: Not easy questions.

Michael Krigsman: These are definitely hard questions. And we know, we...

Anthony Scriffignano: Know who you all are. I'm sorry.

Michael Krigsman: Yeah, exactly — we know who you all are. Okay, so here's a question for you, Anthony, from Greg Walters on LinkedIn, who says: how do you feel about AI reaching the point where data is considered historic and irrelevant, replaced by real-world, real-time inputs? I think this gets right back to what David was just talking about.

Anthony Scriffignano: It is the core of the research that I'm doing right now, so I'll give you an analogy. Think about what happened to the world when Covid started. When it started to happen, we didn't have clarity on what was going on, and there were some brilliant people quickly figuring it out, using what I would call rapid supervised learning — about epidemiology, about localization, about curating all of the effects, and doing what David just said: collecting longitudinal data, but very quickly, to try to figure out the trends, what's happening, the efficacy of certain things, and so forth.

All right — all of that starts happening at the same time we're generating a tremendous amount of data on what happens when you turn off all the airplanes and close all the small businesses, right? So some people — and I was part of this — were collecting some of that information to study longer-term problems that we knew we were going to have.

Right in the middle of all of that, a ship gets stuck in the middle of the Suez Canal. I don't care what you were doing analytically — you were not thinking about that. And so the supply chains got disrupted, and some of those supply chains had something to do with Covid and with emergency response. Some of it had to do with light bulbs.

That sort of disruptive disruption is at the essence of where I struggle right now. There are methodologies — Bayesian methods, methods that involve basically taking a small step forward and then regressing again very quickly — kind of what we think the brain does, a little bit, but not really, because the brain works in a very different way.

Nobody listening to me right now is thinking about how their shoes feel. Now you all are, because I said that, right? So we can redirect our attention; it's very hard to do that with algorithms. So there's definitely work to do in terms of building out new methods. I have developed some of them, and there are many other, way smarter people who have developed others — for looking at coalescing behavior, for looking at atypical or anomalous behavior, for looking at likely malfeasance or non-malfeasance, for looking at whether the intent seems to be consistent with the article about the intent — all kinds of methods.

I think that we will get way better at this much faster than we realize. And I hate to say this, but I'm going to say it: when quantum computing becomes something that we can, you know, get easily, we're going to have to completely change how we think about this, because we'll be able to think about all these things at the same time.

So it's a great time to be studying these disruptive disruption problems, because they're going to get really important.
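As a generic, simplified illustration of the family of methods mentioned above for spotting anomalous behavior (not Anthony's actual research methods), a rolling z-score flags a Suez-style shock in an otherwise steady series — and, as he suggests, such window-based methods struggle when the regime itself keeps shifting:

```python
# Minimal streaming anomaly detector: flag points more than `threshold`
# standard deviations from a rolling window's mean.
from collections import deque
import math

def rolling_zscore_flags(stream, window=20, threshold=3.0):
    """Return a True/False flag per point; True means anomalous."""
    buf = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = math.sqrt(var) or 1e-9  # guard a zero-variance window
            flags.append(abs(x - mean) / std > threshold)
        else:
            flags.append(False)  # still filling the window
        buf.append(x)
    return flags

# A steady series with one shock at index 30:
series = [10.0] * 30 + [100.0] + [10.0] * 19
flags = rolling_zscore_flags(series)
print(flags.index(True))
```

Note the limitation this makes visible: once the shock enters the window, it inflates the estimated spread, so a detector like this adapts to yesterday's disruption — exactly the "disrupted disruption" problem described above.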

Michael Krigsman: We have a question from Arsalan Khan again, who says: how do you make sure that non-experts don't pose as experts because of gen AI? How do you even know that this is happening, or whether it's ethical? And let me just add one more point to this: I'm terrible at math, but I can use a calculator — and, you know, I can hide the calculator and astound you with my math abilities.

David Bray: Oh, I have no doubt. Yes, exactly. And before that it was slide rules — and there probably were people who said that's cheating, just like Anthony's question of when it's cheating: when do you use the calculator to bypass doing logarithms?

Anthony Scriffignano: If I said, let me do that math in my head, and then used the calculator — that's cheating. If you just use the calculator because you know you're not good at math, it's not cheating; it's using the right tool.

David Bray: I agree. So in the world we're going into — and we've had this challenge before, believe it or not, a thousand years ago, when people began to live and die in a city different from the one they were born in — how do you know that the person claiming to be a lawyer, or a doctor, or an academic really has the know-how?

How do you know they have the expertise, if you're not an academic yourself? And how do you know that whatever they're practicing — law or medicine — they're doing in your best interest, as opposed to solely their own? Back a thousand years ago, this led to the professionalization of certain parts of society. In medicine, you had the Hippocratic Oath, but you also had professional societies requiring those who joined to demonstrate both knowledge of and experience in the field, because knowledge by itself does not mean you actually know how to do what you're doing.

And that's the one thing we sometimes miss with generative AI: these machines might spit out what appears to be knowledge, but that doesn't mean they actually have experience in whatever topic they're talking about. And then lastly, there was an oath applied by the society, so that later — whether someone had an unfortunate medical outcome, or something went wrong with a legal proceeding — the professionals themselves would adjudicate.

Did this professional act in the best interest of their client or their patient, or not? Because they were the only ones who had the expertise. Now, what's happened since the 80s and 90s, and especially with the internet, is that we've flattened things. In some respects, that expertise has become devalued, because you can use your favorite search engine to look up information.

And I guarantee you will find information that supports your view — it doesn't mean it's valid information, or that you're interpreting it correctly. But in some respects, we've killed expertise. And yes, there were smoke-filled boardrooms and things like that that did need to get blown up, but we've not figured out a way to effectively have professionals police themselves.

And I don't think the answer is to have government do this, because then government would have to hire all the professionals and pay them lots of money, and even then they'd be out of date. At the same time, it's not laissez-faire — just let the companies be companies. So we may need to return to something like certified ethical AI.

We need certified ethical data, in which the experts themselves adjudicate: when you roll out a system and something later goes boom, did you act in the best interests of the community and society and something simply went wrong, or did you do something unethical? You can think about it in terms of certified public accountants — you don't worry that a CPA, even under intense pressure from their boss, is going to cook the books.

If they do, that's illegal, and they'll also lose their certification. Instead, they will try to represent your best interests. So I think we have to figure out how to navigate this, because, precisely as you said, most people are not going to have the level of expertise to know whether they're being sold something legitimate, or snake oil, or vaporware.

But that also means we need to start finding ways to make these things more explainable to people, even if they're not deep experts.

Michael Krigsman: This is from Wei Wang, Anthony, and she says: where may we see AI causing lasting damage, where regulatory reform may put an end to certain technologies? Or let me rephrase that: we were talking earlier about the regulatory environment — where do you see regulation damaging the technology to the point where it may not be useful?

And let me add another aspect: the explainability that David was just talking about — where does explainability fit into all of this? And if you can answer very quickly, I'd appreciate it.

Anthony Scriffignano: Anytime you have systems that are trying to behave like human experts doing something, you're going to run the risk that if you overregulate, you tie their hands. So I'll give you an example: health care. We want a lot of regulation around health care — if you look at regulatory intent, there's a lot of regulation there.

I think there should be. At the same time, we need to move faster with drug discovery. I think we need to move faster with all sorts of things that affect human health — it shouldn't take years and years and years. Some of the work that I've done around detecting coalescing anomalous behavior can be applied to looking at digital X-rays and CAT scans.

I don't know how to do that, but I'm sure somebody else does — and there are all kinds of regulations there, obviously. So you have to think about accelerating things — the amount of time that can go by before you get slowed down — when you're doing something for which there's no other way. Regulation can sometimes have a dampening effect: yes, you slowed things down and you prevented, say, privacy from getting out of hand — which also prevented us from understanding the commonalities in how people were treating Covid. And that's a problem, right?

So I think health care is the big one. And then, when I think about IoT and autonomous devices — these devices are going to have to get better at discovering each other and talking to each other.

Do I want my car telling the car coming in the opposite direction what the traffic is like behind me? Well — yeah, but there are probably reasons why we don't want that right now from a regulatory standpoint, and there are probably ways that could be used to give me a speeding ticket or whatever. So we haven't caught up to this yet, and I think we will miss some intended good by being overly careful with AI.

And I think the largest area where that will happen is where we have algorithms trying to behave like people doing the same thing, only faster.

Michael Krigsman: Shelly Lucas asks: how should liability issues be addressed in cases where AI systems contribute to clinical decisions that result in errors or adverse outcomes? In other words, who pays when the AI screws up in health care, since we were just talking about health care? And Arsalan Khan says: we know that AI is being used by insurance, banking, and other industries, including health care — do these organizations have an ethical obligation to tell their customers that they are being judged by AI?

David Bray: This goes back to when we were talking about what is trustworthy, and I said it's really the overall socio-technical rollout. So I would say whoever is rolling out the system for a given purpose also takes on the role of responsible party, and the liability. If it's being rolled out by an insurance company, yes, the insurance company is responsible.

If it's being rolled out by a health care provider, they're responsible. And that gets to the question of whether they have an obligation to tell us. I think, given how generative AI is different from what came before — yes. In the past, it was not disclosed that an expert system or a decision support system was being used, but those were not producing generative effects.

The fact that generative effects can introduce nondeterminism, and can do things that are not as deterministic and rule-based as other approaches, means, I think, that you should have the ability to opt out. Actually, I would say the more ethical approach would be: we are only going to include you in this if you opt in, and if you decide not to, that's your choice.

And that actually allows choice on the part of consumers. I think that whether you're a for-profit, a government, or a nonprofit, we're increasingly going to need to move to worlds of choice architectures, where it's not a single answer for everybody. If you're willing to provide no data, then that's your choice.

But you may not necessarily get the best care, because now each time we see you, we're going to have to try to collect data again. Or if you give some data, but only when the doctor sees you, okay, that's your choice. And if you're willing to have the IoT devices that you wear constantly streaming digital data, you can get that quality of care. But we're not going to assume one size fits everybody.

It's going to be increasingly choice architectures. And then in terms of liability, it is the person or the institution that rolls out the broader sociotechnical system, again, ideally with adjudication by professionals outside the company if something goes wrong, to say: was this just a bad outcome, or was there something they did that was malfeasance?

Michael Krigsman: What about AI-led job displacement, where the recipient is a victim, in a sense, and has no choice?

Anthony Scriffignano: There are certainly places where AI will displace humans in jobs, and I don't mean to sound insensitive to anyone who's in that situation, so let me say that right from the outset. Every technological advance has had that challenge, where now, all of a sudden, because we have this technology, we don't need people. There used to be rooms full of people doing calculations at banks.

We don't have those rooms full of people doing calculations at banks anymore. So I understand that will happen. But I also think there will be tremendous opportunity for new types of employment if we keep our heads up and continue to evaluate the skills that we have and the skills that we need. The skills that you had may not be the skills that you need going forward.

And so I think it's really important that we constantly think about this, obviously, in terms of machines doing things that unskilled people could have done. That's going to be the biggest challenge, because those people were already unskilled. But if they were skilled people, then they have the opportunity to update those skills. I used to be a COBOL programmer, right?

You know, you laughed, David. It takes one to know one, right?

David Bray: I know, I know. I'm glad I did it.

Anthony Scriffignano: Yeah, so did I. My point being that, you know, I don't know if I could write a program in COBOL anymore. I probably could if I looked at one. But it's not a super valuable skill anymore, although there are still people who need to do that. Now there are people who write convolutional algorithms and develop, you know, behavioral anomaly detection and so forth.

The job of a data science practitioner in the not-too-distant future will be very clinical. You'll have to evaluate a complex system that you can't completely understand, develop some sort of differential diagnosis, do some kind of intervention, and test the results of your intervention. That sounds a lot like being a doctor, right? Well, doctors have to constantly renew what they do. So renew what you do: keep your head up, read what's going on, pay attention, keep learning, and you'll be fine.

Michael Krigsman: David, how will AI affect wealth creation and the distribution of wealth? And should companies contribute to a wider social safety net for those at risk?

David Bray: Absolutely. The short answer is, it is going to have impacts. Unfortunately, every time a new technology is rolled out, not only does it cause what Anthony pointed out, job displacement in some part of the economy, but the gains are usually very asymmetric: a few people capture a whole lot of the wealth creation, and everyone else sits on a lopsided Pareto curve.

And so I would say we're going to see this again. That partly gets to why we're probably all talking about AI ethics: yes, we care on an individual basis, but we also care about how this is going to shape the next decade. Are we going to see only a privileged few reaping most of the advantages in terms of wealth creation and power, or are we going to find ways to spread the benefits across communities as a whole, especially because we're probably not having just one technological revolution, we're having multiple ones?

The last thing I would say real quick, just to build on what Anthony was saying about jobs: I worry that right now algorithms are unintentionally forcing people into certain job swim lanes at the very moment when the world is changing such that you actually need mobility across different jobs. And that's an unintended effect of anyone who is using algorithms to scan resumes or do recruitment.

And so I think, if anything, we need to blow up the model of how we hire and recruit people, because I want to look for people less in terms of what was the last job you did and more in terms of your general capability: the ability to learn on the job, the ability to work in different scenarios. Because I worry that algorithms right now, if we're not careful, will result in a caste system in which job mobility has actually been eroded rather than improved.

Michael Krigsman: Anthony, what advice would you offer to business leaders around managing this set of very difficult issues?

Anthony Scriffignano: The first thing is, be humble. You cannot do this all by yourself; I don't care how smart you think you are. Expand that circle, because you can't know all of this field. No one can. The second thing is, keep your head up, because this field is constantly changing. If you think that you're just going to get smart enough, check the box, and move on to the next thing, you're going to become irrelevant so quickly that you won't even be able to think about what happened.

And the third piece of advice I would offer is this: when we go to school, we learn to check our work. So check your work. What has to be true in order for you to use that data? What has to be true in order for that method to be the right method? What has to be true in order for the belief system you're counting on, the epistemology, the framework you're using, to be the right one?

Always question how you came to the conclusions you came to. Don't just push the AI button and consume the answer.

Michael Krigsman: David, you're going to have the last word. What advice do you have for government policy leaders when it comes to managing this huge ball of wax?

David Bray: First and foremost, you're going to have to figure out new ways of rolling this out and doing policy. It used to be that you could roll out a policy, maybe in five years it would start to have an impact, and that impact would be felt for the next 15 years. The world is changing so fast. It's a Red Queen problem: you know that you cannot keep moving at the speed you're moving and still stay where you are.

You're going to have to figure out ways of learning by doing, ways of creating sandboxes and moving forward. The second thing is, that means you probably need to find ways to partner with trusted institutions like nonprofits and universities, which can take on the role of experimentation and learning by doing with the for-profit sector, with communities, and across borders.

The third thing I would say is, any policy you write should include sunset clauses or revisit clauses, because the worst thing that could happen is ossification, where we write policies dealing with AI at this point in time, and, given the sheer rate of change, come 2026 or 2027 we find that we've actually bound ourselves with that ossification.

So we're going to need to do outcomes-based policy that also includes sunset or revisit clauses.

Michael Krigsman: This has been a thrilling and exciting discussion. I want to say an enormous thank you to David Bray and Anthony Scriffignano. Thank you both for taking the time to be with us. I'm very grateful to you both.

David Bray: Thank you, Michael.

Michael Krigsman: And a huge thank you to everybody who was watching, especially you folks who asked those excellent questions. You are such an intelligent, sophisticated audience, and it's such a pleasure to interact with you. Before you go, subscribe to our newsletter and subscribe to the CXOTalk YouTube channel. Check out CXOTalk.com, because we have incredible shows coming up.

Thanks so much everybody, and I hope you have a great day.

Published Date: Feb 23, 2024

Author: Michael Krigsman

Episode ID: 825