Discover the transformation of Generative AI with CXOTalk – from IBM Watson to ChatGPT, uncover the milestones in AI technology and its impact on natural language processing.
Explore the fascinating evolution of Generative AI from IBM Watson to OpenAI's amazing ChatGPT. Join this exclusive CXOTalk episode as we delve into the groundbreaking advancements and applications in AI and natural language processing. Our guest is Sol Rashidi, who returns to CXOTalk by popular demand.
The conversation includes these topics:
- Understanding the Chief Analytics Officer role
- Evolution of AI: From IBM Watson to today
- Unique aspects of ChatGPT and direct-to-consumer AI
- Marketing drives broader acceptance of AI
- Technology capabilities driving AI adoption
- Relationship between generative AI and data science
- Best practices for implementing AI within organizations
- Importance of AI ethics
- Broad social acceptance of generative AI and algorithms
- How to build a value case for generative AI
- The optimal implementation process for generative AI
- What are the skills required for a Chief Data Officer or Chief Analytics Officer
- What is the state of gender-inclusive AI today
- How do we create ethical boundaries for AI
- Issues on AI explainability and transparency
- Impact of AI on job security
Sol Rashidi currently holds 7 patents, with 21 filed in the Data & Analytics space, and is a keynote speaker at several technology conferences, speaking on topics such as machine learning, data & analytics, and emerging operating models for organizations taking on transformations in the D&A space. Prior to joining Estee Lauder as their Chief Analytics Officer, Sol was the Chief Data & Analytics Officer for Merck, EVP and Chief Data Officer for Sony Music, and Chief Data & Cognitive Officer for Royal Caribbean.
Goal-oriented and a team player, Sol believes in uncomplicating the complicated and cultivating environments that are innovative, driven, and collaborative. Sol has a unique ability to bridge the gap between business and IT; her deep understanding of multiple functional disciplines (i.e., change management, enterprise data, application architecture, process re-engineering, sales, etc.) enables her to drive change by articulating the need for it in organizations that otherwise wouldn't evolve. Sol played NCAA water polo and rugby at Cal, competed on the Women's National Rugby Team for several years, and completed an Ironman.
Michael Krigsman: We're discussing generative AI in the enterprise. What can we learn from the history of AI that will help us in business today?
Our guest is Sol Rashidi, who has been a chief data officer and a chief analytics officer at numerous large organizations.
Sol Rashidi: I think folks believe they understand the importance of data and analytics, but they really don't. It's hard to absorb and understand what needs to be done, how quickly it needs to be done, and the elements of patience, resources, focus and attention, and stakeholders.
A lot of my job is really conversational. It's alignment because even though I plan out or can plan out the best vision, mission, and talent pool, and I've got a pretty good balance now between what I call my offensive playbook and my defensive playbook, bringing other people along for the ride has been very, very... challenging is the wrong word. I would say it's the number one thing in every organization that continues to be front and center.
That's just how team sports are, you know, whether you're the strongest link, weakest link, or a link in the middle, and you're trying to create a space for yourself. That collective effort in team sports also applies when you're running a team within a larger organization (and your role happens to be the CDO, CAO).
CAO role, it's a new role. Folks don't necessarily know what to make of it.
My running joke when I first join a company is, "What do you think my job is?" If I ask 100 people, I'll get 300 different answers. But there are still a lot of linkages between playing team sports, group sports, and this role.
Long story short, I'm in my fourth position and term now. Some say I was one of the first generations to be hired as a CDO. It's been an interesting ride and, in every position, I continue to learn more.
It's an amazing space to be in because, even in this conversation around AI or generative AI, ChatGPT, folks forget it's not about fancy algorithms. The fundamental component of all that still continues to be data.
Michael Krigsman: You've had a lot of different roles going all the way back to IBM Watson. When you look at that evolution, what do you see as changed and what has that evolution been?
Sol Rashidi: My Watson days were 2012, 2013, and part of 2014, so two and a half years. After 2011 when Watson beat Ken Jennings, IBM said, "Okay, we're ready to take Watson to market."
I was really privileged. I had some leaders who I was just like, "I'm ready. Put me in, coach." They believed in me, and so they put me on the Watson team.
Fast-forward, it's 2023, ten years on. I know some people will disagree, but a lot of people will relate to this: not much has changed.
The interesting thing is, in terms of that comment, what I'm saying is that identifying the proper use case for applied AI is still the number one challenge. Needing a corpus and a data set to train on still hasn't gone away.
Building questions that you can ask and train the model on, so you can say yes versus no to help increase the threshold of accuracy, is still the same. Finding individuals with the time and patience, whom you can take out of their full-time jobs or give this as part of their full-time job so it becomes a part-time job, still remains the same.
Time, patience, cost, that still remains the same. This isn't a switch that you flip on. I think people forget that.
When Watson Engagement Advisor went to market (or Watson as a whole), the corpus, the data sets, training the models, getting people to teach the algorithm right from wrong so that it has that accuracy level: you still have to do all of that, whether it's with ChatGPT applied toward enterprise use or any other generative AI model. None of that has changed.
I think the uniqueness about the current trend and what everyone is talking about is two-fold.
One, the channel strategy. It's the first time that something has been available direct-to-consumer whereas any other AI software historically has been meant for enterprises, so it's been predominantly B2B.
I think the second component that makes it truly unique is years in development. Before Microsoft even invested in OpenAI, they had $2 billion just in research funding and eight years in development before they released ChatGPT in November.
I don't know if folks know that they've been around eight, nine years purely developing this thing that we now see as ChatGPT that we think is a miracle and magic. There was a lot of time and effort and energy put into it.
They even say openly, as part of their go-to-market strategy, that it's not perfect. It's not going to be accurate. There are a lot of hallucinations going on.
Not dissimilar to Watson, but it had more years in research, engineering, and development, and the channel strategy is very different. It's not B2B. It's direct-to-consumer.
Michael Krigsman: You're saying that essentially an important fundamental difference is the marketing of AI today as opposed to the data aspects.
Sol Rashidi: The data aspect is still the same. You're right. Nothing has much changed.
There is some element in marketing because of that unique channel strategy. They did go direct-to-consumer.
As everyone knows, universities and college students, everyone is using ChatGPT as an aid. It's definitely taken hold. There's no doubt about it.
I think the other thing that's fundamentally different is not dissimilar to Google Glass. The world has to be ready for something, and we weren't necessarily ready ten years ago. We weren't ready 20 years ago.
The field of AI was created in the 1950s by a bunch of brainiacs at Dartmouth. If you go back and think about it, that was more than 50 years prior. We weren't ready.
A lot of companies have phenomenal ideas and phenomenal products and platforms, but the audience, the mass adoption, has to be ready to understand and take it in. Otherwise, it's not going to go anywhere.
I think we're finally at a point where, yes, there's an element of marketing to it, but we're kind of ready to listen to the story.
Is there still some hype at an enterprise level? I do think so. And we can all use Metaverse and Decentraland and what happened a year, year and a half ago, and NFTs and crypto as examples.
Now there's debate as to whether they have longevity. But the fact is, it was what everyone was talking about a year, year and a half ago, and no one can deny that.
We've been talking about AI for a really long time, and Watson did an amazing job of introducing AI at scale, commercializing and democratizing artificial intelligence, making it an acronym that everyone now uses. I think we're now at a point where folks are just more open to listening because they can see the benefits and the impact it has day-to-day, whether you're an individual student, a nurse, any type of knowledge worker, or you work for a software company that just needs to accelerate things.
Michael Krigsman: Do you think the reason for this broader acceptance today is due to technology capabilities such as better hardware? Is that the bridge that makes it more apparent to everyday users what the benefits can be?
Sol Rashidi: We are in a constant state of exponential change, not incremental change. And today's pace of change is as slow as it's ever going to be.
Do we have more compute power at hand? Absolutely.
Do we have more storage available to us? Absolutely.
Are things cost-prohibitive now? Not really.
You can buy a five-terabyte jump drive on something that's two inches by two inches for $150. That wasn't the case 10 years ago, and it definitely wasn't the case 20 years ago.
There's definitely an evolution in compute power and chips and storage that's made it not cost-prohibitive anymore. And so, I think that democratized, if you will, applications towards AI and more people are prone to using it.
But I think the direct-to-consumer channel strategy and the different use cases that individuals have used, seen, and brought to light are also creating a halo effect that amplifies adoption. It's no different than when a few companies decided to go open source.
Why would anyone do that? Well, the best creators are the ones who have access to other creations, because it inspires them. It allows them to think about better, newer, faster ways of doing things. And so, by having this available (previously for free and now, let's say, $20 a month), it's generating other ideas, and I think it's creating this crowdsourcing capability.
Now everyone is talking about it, so there's an openness. There is an element of, "What can this do for me?"
I think AI ethics is something we still need to talk about. We haven't necessarily touched on that just yet.
It's around the corner because it hasn't kept pace with AI innovation. But I think we're now in a place where this is here to stay. The question is how, when, where, and with whom, and the pace at which it's going to penetrate everything we do on a daily basis.
Michael Krigsman: What is the relationship between generative AI and data science? We know that the data is the essential aspect, but can you help draw that link for us more explicitly?
Sol Rashidi: There's a codependency to a certain degree, but it's not a one-to-one relationship.
The word generative, in general, for me, is a little bit funny. [Laughter] But it's here to stay because everyone is calling things like ChatGPT generative AI.
But if we had to classify or categorize a specific sect of AI and call it generative, that's not a problem. It really means that we're creating unique content based on existing content.
We're retrofitting what has already existed and surfacing it in a new way. But it's not original content, per se. That can be applied to imagery, audio, voice, language, and so I think that's where the word generative came from. It's giving AI an identity that's specific to some of the retrofitted content that we're seeing right now.
Now, if you look at the space of data science, there are many specialties within it, anywhere from computer vision to natural language processing to robotics and automation to deep learning and neural networks. Depending on the use case and the application, those are what we fundamentally leverage with generative AI.
Depending on your use case, you can apply one or many of those disciplines within data science. But it's not one-size-fits-all. There is a codependency, but it's not a one-for-one relationship.
Just because you know data science doesn't mean you know generative AI. [Laughter] And just because you know generative AI doesn't mean that you know a discipline within data science.
Michael Krigsman: Let's talk about the implementation of AI within organizations and generative AI. Can you describe what's the best way to go about it?
Sol Rashidi: What I want to avoid is what happened with Metaverse and Decentraland: everyone jumping on board, and then organizations and enterprises becoming disenchanted because they didn't necessarily see the ROI (whether it was brand equity, monetary, or relevancy) from the investment that was made.
I don't think that's going to be the case here. But what folks aren't familiar with (because there are very few folks in the industry who have actually gone through the painstaking effort of deploying, I'm just going to say AI, not generative AI) are the use cases within an organization.
We use the term a lot. It's kind of like a kitchen sink item now. Whether it's a buzzword or a marketing word, everything has AI in it.
My challenge with some of the software providers is: you're running SQL scripts with a really massive decision tree. That's not really AI. You're working with a limited data set. It's still not really AI.
I have this drawing that I created to define what is AI, and I've shared it with a few folks. I may share it here a little bit later on.
But the things folks have to remember with generative AI deployments are, one, use cases and application. It's not going to solve all problems, and sometimes you need a hammer, not a jackhammer, to solve a business problem.
I think, two, you also have to understand that your biggest roadblock to deploying something like this is going to be risk and compliance. Folks aren't thinking about that because when you create the corpus, when you identify the data set that you fundamentally have to train on, well, if you're going to use technologies from OpenAI or other companies, that data is not sitting in your environment.
People forget that you have to fundamentally give that data and put it into another environment that fundamentally has deployed the compute power and the neural networks needed to be able to ingest, decipher, and make meaning of that data so that you can train it. I don't think folks have given too much thought about that.
You have to define the use case, define the corpus and data set. Then you have to give your data and put it into another environment that has the capability that something like ChatGPT has. Then you have to actually deploy a small army of individuals (whether on a full-time basis or part-time basis) to constantly ask it questions so that it can start understanding right from wrong and it can hit that level of threshold that's acceptable for enterprises to deploy capability within AI.
That takes time. It is not a switch you turn on.
If you think about identifying the data set and putting it in someone else's environment, you can imagine that for the risk and compliance teams, that's a big red flag. The majority of your time is going to be spent on how to secure the data because you now want to introduce a capability.
I think those are some of the things to consider that I don't think we've really fully thought through yet.
Michael Krigsman: We have an interesting question from Arsalan Khan on Twitter who wants to know, "Have movies and influencers created a good or bad view about AI and its uses?"
Sol Rashidi: I think part of that chasm, that hockey stick curve that we've now officially jumped over, is because of the movies about AI. If you think about ten years ago, there were a lot of robots taking over the world. [Laughter] I'm not going to go into the Transformers.
There were a lot of movies prior to Transformers about how nascent technology can be dangerous because eventually you teach the machines to be stronger and more capable than the humans. Then they can eventually think on their own, and they can take over. Then we humans become the peons of the world we originally created, if you will, and civilization goes to dust.
I'm not saying that it's true or not true. I think that's a very fear-based approach towards it. And I think that there is recognition that we need to start focusing on AI ethics.
Recently, I posted on my LinkedIn profile that there is a six-month pause request from the Elon Musks of the world saying, "Listen. Before we train this past GPT-4, let's make sure we give consideration to whose hands we should put this in," because this can be dangerous if misused. As we know, there's a lot of fake news, fake imagery, fake a lot of things.
While we want everyone to have a moral compass, not everyone is driven by what's right for humankind and humanity. I think, before we democratize this capability, which it eventually will become, it's time for those who have real influence to take pause and make sure we do it ethically and put it in the right hands.
While movies, I think, have created awareness and a fear-based view of what could happen if we were to leapfrog into nascent technology, I think the real influencers of today understand its implications and power, and they want to take pause and make sure we're deliberate about its introduction so that there's balance for us.
Michael Krigsman: The next question is from Chris Petersen on Twitter. "Is some of the acceptance of generative AI today due to algorithms, people being more comfortable with algorithms and algorithms playing a more prominent role in our lives?"
Sol Rashidi: It's funny. I think it depends on the company, and I think it depends on the crowd you hang out with. If you look at really large enterprises and institutions, those that are very creative in nature, those that are very content heavy, those that are heavy in imagery and audio and language and brand and marketing, there's still fundamentally organ rejection with this because experience, intuition, and so-called keeping your finger on the pulse are still how they drive the business.
I've been in many situations where we're trying to develop a go-to-market strategy across a new channel and a new consumer cohort, and the model says you've not tapped into this consumer segment, this consumer cohort, and there is an opportunity there based on their purchasing patterns and cross-pollination.
But I still fundamentally have conversations with execs that say, "No, that's not our... It's just not going to work."
Well, how do we know it's not going to work? We've never tried it.
By the way, the model says it will work. Can we try it? But there's still an apprehension of doing it.
But there are other cases and other individuals that are like, "Listen. We're doing really well, but we're not doing well enough. I know we can do better. Let's explore. Let's experiment, so what do you got?"
I'll show them some examples thanks to some phenomenal talent and teams that I've led in the past. When they're open to it, that's I think where the magic happens.
I don't think it's companies. I don't think it's an era thing. I think it still comes down to the individuals who are willing to experiment, test and learn, and let algorithms potentially influence how they go to market: new channels, new opportunities. For as many who are open, there are still those who are very apprehensive about letting the model or the algorithm run the show.
I think the second layer on top of that is that algorithms without context are completely useless. It's not pure mathematics because math, in and of itself, can be easily manipulated. But context around business language, business vernacular, trends, and what's happening externally is super, super important.
Context is everything when it comes to numbers and what the data shows. If your data science team or your analytics team hasn't incorporated that, you're looking at values that are absolute, without their surrounding variables, and that in and of itself can be very dangerous.
Michael Krigsman: This is from Wayne Anderson on LinkedIn. This is an important one. Basically, "How do you pull together the value case, the most critical factor for generative AI?"
Sol Rashidi: I think if you take a look at the amount of time and energy and focus and attention it requires, I'm not sure there is a silver bullet when it comes to the value case. But in the storytelling, in the narrative of why we should do it, I would say value and ROI, there are a few different angles.
- There's always, let's do this because we're going to make money, so the monetary ROI.
- There's always the case for, let's do this so that we're always relevant, so the relevancy ROI.
- There's always, let's do this because we need to make sure that our talent is up to grade and aware of what's happening in the marketplace. We don't need to be the best of the best, but we need to be able to understand where the market is moving so that we're not behind and we're not having to catch up three, four years. That's what I call cultural ROI.
When it comes to value cases, I always say, "Let's not do this for the money. We're still testing and learning. But let's take a look at a real problem, a super, super simple one." It may not be sexy, and it may not be glamorous, but: training and onboarding.
When you hire new talent (and it could be within the customer service department), again, not sexy or glamorous, but the number of PDFs they have to go through, read, and understand, and the information they have to digest and literally regurgitate before you can put them on a 1-800 call, is astounding. It takes three to four months, on average, to onboard one customer service rep.
Well, I think, with certain applications of AI, you can shrink that down to three to four weeks because they don't need to know everything before they actually go live. They now have a virtual assistant (for all intents and purposes) so that if they don't know the answer to something, they can quickly look it up without having to depend on search in a browser (which in itself could be clunky depending on what you use) or have to flip pages to understand what chapter they studied and where it was. They can, in real time, have a conversation and be productive because they have that assistant available to them.
You can say that the cost of that talent for the three to four months before they become operational is essentially your ROI in this case and your value proposition: onboarding can now be three to four weeks with this deployment.
Now, if we only deploy it within one division, one section, and across five to ten reps, you're not going to see the value. It's too small. But if you're now saying, "Okay, our pilot is going to be that, but our intention is that, as part of our onboarding process moving forward, we're going to operationalize this virtual assistant globally so that we can make our knowledge workers productive within a matter of three to four weeks," now you actually have a business case and a value case. That's worthy of investing in.
Michael Krigsman: Then describe for us what that implementation process should be. What is the optimal implementation process?
Sol Rashidi: Don't suggest it if you don't think you're going to get attention for a year and a half. I will be honest. The shortest I've ever deployed something has been six to seven months. That is with all the resources, all the investment, and all the attention, which in today's age isn't the case.
Give me one company that doesn't have a million strategic priorities and they're competing for the same resources and funding bucket. So, if this is just another moonshot project that's glamorous and sexy because everyone is talking about it now, I'd probably say, "Lay back. You're not ready yet."
But if it's something that they're willing to invest in, go all-in on because they see what it can be – and a lot of this is the art of the possible in that it's a two-, three-year journey and they're not going to waver in those two, three years – then 100% full steam ahead, because part of that implementation, again, is the realities of what it's going to take to deploy it.
Okay. First, you need a team to identify the use case. Then you need to have discussions on what that use case is. You've got to get buy-in. You've got to get stakeholders. That's part of just pre-project kick-off stuff.
Then you've got to outline and find a team that says, "Okay, here's the data sets we're going to need to define the corpus so that we can train the model on (as an example). This is the type of data we're going to need. And by the way, this is the type of data we are going to land in someone else's environment."
You've got to go through risk. You've got to go through compliance. Before you even start the project, you've got to get alignment, buy-in on the use case, and you've got to get risk and compliance to be okay. And that is not an easy conversation.
There is a lot of negotiations back and forth because the goal is to enable a company, not stop a company from growing. But you need to be able to do so in a very... Risk-averse is not the right word. Cautiously and deliberately so that you're making very specific decisions knowing what the risk reward is going to be.
Once you have alignment, which in and of itself can take three to six months—and that's average. There are companies that are going to take a lot longer—then it's finding the team that can focus in on this. Oftentimes, you don't have internal talent ready to go because they've never deployed an AI model or generative AI model. But you've got great talent who is willing to learn and have the aptitude, so now you've got to either pluck them from other managers and put them on this project or you've got to find who they are.
You definitely don't want individuals who have idle time and aren't on a project. [Laughter] That's not the talent you want on this. You actually want the people who are in demand. That's going to take another three to four months to be able to build that talent pool.
Once you've got alignment, once you've found the talent, you're already six, seven months in. Then it's fundamentally, "Okay. Here's the data. Here's how we're going to secure it. Here are the hundreds of questions we're going to ask to make sure that, when the customer service rep who was onboarded in three weeks asks a question like 'What are my average sales within North America?' or 'What other markets do we sell this product in?' the answer comes back with high confidence and accuracy."
Then you have to find who is going to create the questions. Well, guess what: you've got to go to your most knowledgeable and skilled customer service rep or team to create those questions.
Then you have to overcome the question of whether that means job security issues for them. They've built their reputation on how much they know, the institutional knowledge, knowing the things most people don't know because they've had the tenure. Now you're asking them to document all of that. Well, you've got to overcome that as well.
Then you teach. You ask the Q&As from the data set. You get it to a point where you're like, "Okay. It's at a 95% threshold of accuracy. We're good to go." Then it's a matter of where you deploy it.
You can imagine this isn't as easy as, "Oh, they have an API. Let's run with it."
When you're deploying things like this within large organizations that have a lot to risk and lose, it's quite deliberate, quite thoughtful, quite time-consuming. And so, I would say, my teams and I have built some really, really good things.
I'll be honest, I've made a career, but not by seeing these capabilities fully deployed while I was at the company. It takes a minimum of a year to two years for something to be deployed, for there to be the ah-ha moments. And if you've moved on, you get the credit afterwards, if you will. [Laughter]
But folks don't understand it. They're like, "Why are we doing this? This is taking so much time. This isn't what I expected." You've got to go through all of that while you're with the company.
Michael Krigsman: We have another question from Twitter. "What are the skills and experiences required to be a chief data or chief analytics officer in the next few years?"
Sol Rashidi: When I started back in 2016, I was part of the first gen of CDOs. I was an MDM expert who had done eight global ERPs, and I knew data migrations, data integrations, data quality, data governance. I had the enterprise data management stack and capabilities and competencies and talent lock solid. But that's what I call the defensive playbook.
I hadn't necessarily tapped into the offensive playbook, which is around the consumption layer. We're doing all this great work, and we're building the building blocks and the foundation, but what is it going to be used for?
At that time, I hadn't gotten into building products. I hadn't gotten into developing a consumption layer for digital apps. I hadn't gone into self-service analytics or data science as a service. All that came in, interestingly enough, in my second CDO position.
My first CDO position was just making sure that the backend was structured, scalable, and clean enough, and creating a consumption layer so that others could leverage the information we were responsible for putting together. The second CDO position was when I really got into that offensive playbook: I got to create apps, I got to create data science as a service and analytics as a service, and I got to create a consumption layer that was leverageable by no-code, low-code, and high-code individuals. That's when I had fun. I had a ton of fun when I got to do that.
That now requires you to understand not just fundamentals around enterprise data management but APIs, app integrations, service calls, React and Angular, and UI/UX. Your curriculum, your skill set just completely expanded, and so I had to be a student of my own work. I always say I was flying the plane while building it at the same time, but that was the stuff I was loving doing.
The third CDO position was more around strategy, vision, and capabilities, but I didn't necessarily have ownership of development. That was really uncomfortable because it doesn't matter how great of a vision or strategy or mission you have. If you're fundamentally relying on another group that then outsources development to another group, and you're in queue for priorities, it just doesn't work.
The quality that you expect to see or the attentiveness and priority that you expect to see just withers. It gets diluted, and it is just not concrete.
Then the fourth position I've had has been very, very marketing-centric, so now I had to learn everything about digital marketing, commerce, and what's considered good versus bad when it comes to UI and UX, attention spans, clicks per view, impressions, and pixelations.
Every CDO and CAO position I've had has, fundamentally, been based on data. You have to know that. But each has been different thereafter.
If you ask me what the next two to three years are going to be, I would say, first and foremost, the ability to tell a story. Folks are going to know data is important, but they'll have absolutely no idea what that means.
You have to make something that is not a part of their... Let's say they're the CMO, let's say they're head of procurement, or let's say that they're a genius within R&D. They touch data, but they don't know what it takes to create the capabilities that they're constantly complaining about or issues within the company. You've got to be able to learn the language of each function, each brand, each line of business, and convert what you need to do into something that they can understand in their language.
You, for all intents and purposes, have to be a translator. That skill set I don't think will ever go away, and that's going to be super key. You have to translate into business talk that they understand and relate to.
I think the second is there's an element of confidence and competence that you need to carry with you because folks can be really, really hesitant about trying or doing something new that they're not equipped for or haven't been exposed to. So, they need to have confidence in you and know that you have the competence and you've done this before. That's something you just fundamentally need to exude. It's part of the personality.
Then I think, from there on, you've got to have the fundamentals around data: how things work and get stitched together. But I think everything else is open to what the function and the role are going to be within that company.
There isn't a script. I don't think there's a school that teaches the fundamentals of storytelling with data.
Michael Krigsman: We have a question from Nasheen Liu on LinkedIn. "Can you share your view on how we're progressing towards gender-inclusive AI?" A very important question.
Sol Rashidi: There are still some biases baked in, but I do know that (about five years ago) I started deploying AI to write my job descriptions because there are certain words within a job description or application (just as an example) that, historically speaking, women shy away from and men lean in on.
If your job description, as an example, is filled with words like lead, accountability, and a strong sense of direction, there's a higher probability that men will apply for that position. But if you use words like collaboration, team effort, collectiveness, and shared accountability, what I've been told and what we've seen with applicants is that more women are prone to apply to positions like that.
To answer your question, I don't think we're there yet, but I think individuals can take it upon themselves to use AI to balance out their everyday applications so that, at a minimum, when it comes to talent applicants or understanding subconscious biases, we can call them out a lot quicker, which previously we couldn't.
Michael Krigsman: Arslan Khan asks an important question. He asks a bunch of questions. "Who sets the ethical boundaries? Who enforces the ethical boundaries? Those who create the boundaries will have power, and so how do we make sure the power is used responsibly?"
Sol Rashidi: AI ethics is a hard one, but think about the journey of the SEC or SOX controls or ISOC or HIPAA. Those things took decades to go from generalizations to concepts to boundaries to guardrails to very specific language in documentation that a governing entity is now responsible for monitoring but wasn't necessarily the one who dictated it.
Those are really good examples. They weren't necessarily set by one or two or ten or twenty individuals. Through decades, they formed from a concept, to something we need to be aware of and act on, to now clear-cut rules of what you can and can't do. Insider trading is a big no-no.
I think AI is going to follow the same pattern. The fear that I have in this space is that we don't have decades.
We are officially at ChatGPT-4, just as an example, and there's already been a request for a six-month pause before any further development because, now that we've cracked the nut, if you will, the pace of change is as slow today as it's ever going to be. The luxury of time that we had in the past, we don't necessarily have right now.
I think that's what I'm trying to figure out. Are the influencers, the ones that we all know and love that we hear about, should they be the ones to control it? Is it Sam? Is it Elon? Is it Peter? Or should there fundamentally be an ethics board of individuals, of humanitarians, planetarians, those that are in the technology space, those that are in politics coming together to come up with a more balanced view of where AI ethics should land? One way or another, we need it desperately and I don't think we have years to wait on it.
Michael Krigsman: Can you just talk about the issues around AI explainability and transparency and the implications of what happens if we don't have that explainability?
Sol Rashidi: I think that's what we're struggling with. Everyone is using the term AI. I don't think folks know what it is yet.
It went from 5% of the population who live, eat, and breathe it every single day (but they're not the ones talking about it), then to 40%. Now you've got 90% of the population saying, "Oh, we're doing AI to do this, we're doing AI to do that," and I'm like, most folks are just learning how to spell the acronym.
But we are using the term so generally and broadly that even basic SQL queries or software development components, or just about anything, are now becoming AI. I'm like, "Well, that stuff existed 20 years ago."
There is a glossiness happening to it, and I think that's where the danger lies. It's also why it's built such critical mass.
When it comes to transparency, I take it upon myself. Just look at the example I gave of the deployment challenges: what's changed since Watson? Not a lot, because you still have to go through the process. That process, you cannot bypass yet.
I don't think most folks know what it takes to truly operationalize an AI model or a machine learning model. Transparency, I think, is something that those of us who are experienced (we've had the failures and the successes) just need to continue to share, and share more broadly, so that that message resonates.
It is not a silver bullet and not everything needs to be solved with AI. I think it's a challenge we still have that we need to continue to work towards.
Michael Krigsman: Arslan Khan comes back yet again, and he wants to know. He makes the comment that the largest obstacles to AI adoption are human-related issues such as change management and people with data biases. His question is, "How do you make sure that everyone is on the same page and not afraid of losing their job?"
Sol Rashidi: If I were running a company, if I were the CEO, I would say that you're potentially going to lose this job, but that's okay because, as a company, we are expanding and we have more demand than supply. So, I am going to reallocate your experience, your institutional knowledge, and the brainpower that you have (which machines don't yet have) to a job that's more value-driven and more customer-centric, for example.
To me, there are certain jobs that are going to be replaced. I can't contest that whatsoever. But they are probably jobs that should be replaced, and it doesn't mean that you're losing your job. It means that you get the opportunity to grow into another job. I think that's what folks are forgetting.
I know jobs where folks have been copying and pasting for 23 years of their life, taking it from one system and putting it into another system. We're in 2023. Those things fundamentally shouldn't exist in this day and age.
We've got RPA and robotics to do those things, but those individuals are still valued employees. I think there's a great opportunity, if they're comfortable with a bit of change, to view it as, "I'm not losing my job. I'm actually moving into another job that's more valued for where the world is going right now." That's an opportunity for growth, excitement, and new relationships, and I do believe people are happier when they learn.
Differences between Chief Data Officer and Chief Analytics Officer roles
Michael Krigsman: We have a question from @ajaminarank who says, "How does the industry see differences between the CDO (chief data officer) and chief analytics officer roles?"
Sol Rashidi: I call it CXO. It doesn't matter what the title is. That company is going to have responsibilities and capabilities that you need to deploy, create, set a vision for, and they're hiring you for your competency and your capability.
You're going to get the CDO title, CAO title, CDAO title. Just call it CXO. It's interchangeable.
I will say, though, I enjoyed the CAO side of the house because it's much more business intensive in terms of everything we do, everything we generate is geared towards the business. How do we grow top-line? How do we expand market share? How do we create cost efficiencies? How do we do more with less and better accuracy?
That for me is really fun because I've done the fundamentals around data. Building another platform, building another environment, infrastructure, infosec, data integration, data migration – it's old hat for me. I'm just not as excited about it anymore.
If I get to focus more on the consumer-facing stuff and building those relationships with the presidents, the brand managers, the label heads, whatever it may be, that's the fun part.
But it's not about the title because companies want to call it what they want to call it. Just call it a CXO. Just be excited about the job you're doing.
Michael Krigsman: With that, Sol, thank you. Everybody, have a great weekend. Sol, we'll see you soon. Take care, everybody.
Published Date: Apr 07, 2023
Author: Michael Krigsman
Episode ID: 783