On CXOTalk episode 805, Box CEO Aaron Levie shares insider strategies for enterprises to adopt AI responsibly. He discusses key issues and provides advice on architectures, use cases, data security, and oversight.
How can enterprises adopt AI responsibly and effectively? In this interview, Box CEO Aaron Levie shares insider strategies.
Levie has a unique vantage point on enterprise AI gained from working closely with Fortune 500 customers. He sees enthusiasm to implement AI, but open questions remain around risks, architecture, and use cases.
In this wide-ranging discussion, Levie offers practical advice to help leaders:
- Build flexible architectures to swap AI components as tech advances
- Encourage decentralized testing of use cases to find high-value applications
- Form committees to develop standards and identify opportunities
- Ensure data security and access controls are in place before deploying AI
- Maintain human accountability and oversight of AI systems
Levie covers key issues like potential AI inaccuracies, copyright challenges, and the need for thoughtful governance. He remains optimistic about AI’s potential to boost productivity and creativity but emphasizes this transformation is still in the early stages.
With insider recommendations, Levie helps leaders develop strategies to adopt AI successfully.
Don’t miss this timely conversation on the real-world opportunities and obstacles impacting enterprise AI today.
Aaron Levie is Chief Executive Officer, Cofounder and Chairman at Box, which he launched in 2005 with CFO and cofounder Dylan Smith. He is the visionary behind the Box product and platform strategy, incorporating the best of secure content collaboration with an intuitive user experience suited to the way people work today. Aaron leads the company in its mission to transform the way people and businesses work so they can achieve their greatest ambitions. He has served on the Board of Directors since April 2005.
Michael Krigsman is an industry analyst and publisher of CXOTalk. For three decades, he has advised enterprise technology companies on market messaging and positioning strategy. He has written over 1,000 blogs on leadership and digital transformation and created almost 1,000 video interviews with the world’s top business leaders on these topics. His work has been referenced in the media over 1,000 times and in over 50 books. He has presented and moderated panels at numerous industry events around the world.
Table of Contents
- Enterprise AI Adoption - Concerns and Maturity
- Architecting Flexible, Future-proof AI Systems
- AI for Productivity - Identifying High-Value Use Cases
- Box's AI Platform - Connecting Data and Capabilities
- Managing Risks - Inaccuracies, Hallucinations, Copyright
- ChatGPT Impact - Mainstreaming AI and Open Questions
- AI's Transformative Potential - Optimism and Balance
- AI Copyright Nuances - Model vs. Output Ownership
- Practical Steps for Enterprise AI Success
- Preparing the Technology Architecture for Enterprise AI Deployment
- AI Model Building Amid Hardware Constraints
- Data Readiness for Enterprise AI Projects
- Perspectives on AI Regulation and Oversight
- Avoiding AI Overregulation - Thoughtful Governance
- Elon Musk and Today's Surreal Environment
- AI for Better Decisions and Leadership
- Enterprise AI Adoption - Final Recommendations
Michael Krigsman: Today on Episode #805 of CXOTalk, we're discussing enterprise AI with Aaron Levie, Co-founder and CEO of Box.
Aaron Levie: We have a fairly unique perspective on the AI conversation simply because of how much data is inside the Box platform. We have hundreds of petabytes of content, tens and tens of billions of files. And in every single one of those documents, it could be a contract, a marketing asset, a financial record, an HR document. Every single one of those documents is incredibly rich and valuable information.
The types of concerns and the types of conversations we have with our customers are, "Okay, how do I actually bring the power of large language models to all of this data? How do I keep my data secure while interacting with it through the large language models? How do I ensure I'm remaining compliant with all of the different industry standards that I have to be able to support now that I'm using AI inside of my business?"
There's a very wide range of, I think, challenges but also opportunities that AI presents. They really want to lean into AI. They want to have an AI-first business and business model, but they need ways of connecting AI to their data.
Michael Krigsman: What about the maturity of their usage? If you talk with people in larger companies or midsized companies, I'm sure there are varying degrees of AI maturity.
Aaron Levie: It's a situation where I think everybody is probably relatively immature compared to what the technology is capable of today. At the same time, most companies are at about the same level of maturity relative to their peers. I'm not seeing a lot of companies that are years ahead of everybody else in the field.
We've had a breakthrough technology just in the past 12 months, which is ChatGPT and some of the underlying large language models that power ChatGPT. And so, it's almost impossible for any one company to be that many years ahead of the rest of the industry.
I think we'll look back and realize this was a period where we were extremely early in an incredibly long trend and transformation of the enterprise. And, at the same time, I think companies are aggressively leaning in, trying to figure out where is the big opportunity for AI in their business.
We hosted a dinner last week with about 15 CIOs from a range of different industries: financial services, real estate, media and marketing, and entertainment. The amazing thing was, universally, everybody was diving right into AI. Everybody was diving in with a similar degree of maturity; building out their architectures, frameworks, and taskforces around how they're going to leverage AI in their business.
Yet every single one of them was insanely early in the journey. Very few companies had done wholesale, company-wide production deployments of large language models. It was a lot of experimentation, a lot of testing on different teams in different departments.
There were a few that had leaned in maybe more aggressively, but we're so early. I think we're going to look back in three, five, or ten years and realize that what we're seeing today is just scratching the surface of what's possible with AI.
Michael Krigsman: What I'm taking away from this is you're seeing a lot of enthusiasm, but there's also a lot of, we could say, confusion about what the opportunities are, even what some of the challenges that may emerge are, and how to invest in this thing that's this incredibly moving target.
Aaron Levie: I want to find the right word for confusion in the sense that I think what customers are working through is actually fairly structured and fairly methodical. It's some kind of enlightened version of confusion, where everybody has some pretty clear perspective on what they'd like to do. But we are so early.
What do these architectures need to look like? Where should companies be investing their time and resources? We haven't developed the patterns as an industry yet.
When you think about the reference architectures over the years – How do you manage databases? How do you manage the cloud? How do you manage dev ops? – we don't have these reference architectures for AI yet. All we really know is that we are in this massive tidal wave of large language models and an incredible kind of leapfrogging of these platforms.
But we're so early and the reference architectures aren't there. And so, companies are all trying to kind of figure out (from their peers and from the industry and from vendors) where all of this is going.
A lot of our conversations with customers really orient around how do you create the highest degree of optionality. Today, OpenAI is very clearly the leader in terms of the most advanced AI models. But at the same time, we're seeing breakthroughs from other players.
- How do you have a high degree of optionality where you can swap in different components at different times?
- How do you have an architecture that is future-proof to all of these trends that are going to continue to happen in this space?
- How do you ensure data protection and privacy and security so you don't have any kind of leakage of your data between models or, accidentally, users are getting information that they shouldn't have access to?
- Then how do you test, iterate, and make sure that you're scaling with the rate of how this industry is changing?
A lot of our conversations with customers are more of that kind of philosophical level. How do you implement a strategy in AI that lets you get the most out of it (not just today and right now, but over the next three, five, and ten years)?
Michael Krigsman: What about this investment issue, since there are so many unknowns at this time? What do you tell your clients, your customers, about how to invest? As you said, there aren't the reference architectures yet. But we need to be doing something. So, what should we be doing?
Aaron Levie: A lot of our conversation with customers is trying to figure out where is there so much untapped value inside their enterprise relative to the data that they have. So, where are there insights? Where are there productivity gaps? Where could you be making better decisions? Where could you be enabling employees to work more productively based on the information that you have inside of your enterprise?
In some companies, that's going to be in their go-to-market teams. How can they sell to their clients faster?
In some companies, that's going to be in R&D. How do you develop products more quickly (leveraging your data or using AI as an assistant to make you more productive)?
We're working with companies, again in every single industry. I think where the value creation is in that industry is going to really lead you to where the biggest AI outcomes are likely to be.
As a software company, we spend a lot of time trying to rapidly build software. We want to use AI to help our engineers be very productive. And at the same time, we need to be able to sell and market and bring that software to our customers. So, we want to use AI to facilitate how we actually get this technology in the hands of our customers as quickly as possible.
I think for businesses that really understand where they have a tremendous amount of data, what is inside of that data, and what that data could produce for their organization that would give them some form of extra value, whether it's again what they're building, serving their customers, employee culture, or operations, that's where AI can have some of the biggest impact in their business.
Michael Krigsman: At Box, you're an enterprise, and you're having to deal with these same issues in many respects. It's a moving target. How do you manage it?
Aaron Levie: We have two big components. One is, how do our employees leverage AI?
In that case, fortunately, we're able to use a lot of our own technology, so we announced a product called Box AI that allows you to quickly ask questions of any kind of document or information. It lets you generate content inside of our product as well.
We use Box AI to do everything from drafting a new communication for a sales rep to summarizing a critical contract that we're working on to accelerate the contract lifecycle. We want to use AI across our entire business to make all of our employees more productive. Then equally, we want to build AI into our product in a way that gives our customers an extreme amount of leverage in how they bring large language models to their data.
We've had a lot of companies come to us and say, "Hey. I have thousands – or tens of thousands or millions – of documents that I want to be able to use AI against. I have a bunch of invoices – or contracts or lease agreements – that I want to be able to use AI to extract information from – or summarize or provide some kind of expertise or opinion on."
Instantly, what happens is, when they start to think about that problem, they're thinking about, "Okay, so how do I convert these documents into an AI-ready format?"
- I have to create vector embeddings on these documents and store them in a vector database.
- Then I've got to be able to run AI models against the data inside that vector database.
- Then I have to have an interface that end users can interact with.
- Then I have to have security, permissions, and access controls of who can access that data.
Very quickly, this is a multi-hundred engineer-type problem that our customers are dealing with. This is exactly what our platform is meant to go and simplify.
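The document-to-AI pipeline Levie describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the steps (embed, store in a vector index, retrieve for a query), not Box's actual implementation; the toy `embed` function stands in for a real embedding model.

```python
# Hypothetical sketch of the pipeline above: embed documents, store the
# vectors, and retrieve relevant chunks for a user's question. All names
# here are illustrative assumptions, not any specific vendor's API.

from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str
    embedding: list[float]

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call.
    return [float(ord(c) % 7) for c in text[:8]]

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self) -> None:
        self.chunks: list[Chunk] = []

    def index(self, doc_id: str, text: str) -> None:
        self.chunks.append(Chunk(doc_id, text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[Chunk]:
        q = embed(query)
        def score(c: Chunk) -> float:
            # Dot product as a toy similarity measure.
            return sum(a * b for a, b in zip(q, c.embedding))
        return sorted(self.chunks, key=score, reverse=True)[:k]

store = VectorStore()
store.index("contract-001", "Master services agreement, 24-month term.")
store.index("invoice-042", "Invoice for Q3 consulting services.")

# A language model would receive the retrieved chunks as grounding
# context, behind an interface the end user interacts with.
results = store.search("What is the contract term?")
print([c.doc_id for c in results])
```

Each stage of this sketch (embeddings, vector storage, retrieval, the user interface, and permissions on top of all of it) is one of the layers Levie notes customers would otherwise have to build themselves.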
What we've been building for well over a decade and a half is a platform that lets you securely store, manage, share, and provide the access controls for all of your unstructured data, all of your unstructured content. Now what we're doing is connecting AI models to that data in a very secure and compliant way.
For us internally, building out our platform, the way we think about it is, how do we have a modular architecture where we can bring in AI models from any vendor that our customers are going to want to work with over time and be able to help our customers take full advantage of all of the amazing innovation that's happening today?
Michael Krigsman: Please subscribe to our newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com. We have amazing shows coming up.
What about from a business process standpoint? You just spoke about it from a product standpoint. But what about the use of AI in a practical way inside Box?
Aaron Levie: We encourage our employees to use AI in any way that can make them more productive while also protecting against any kind of risk in our business.
The classic example is if you use AI to, let's say, help you optimize code that you're working on. You are still responsible for what you ultimately put into the codebase (as an individual). There's no way that you could ever say, "Hey, AI told me to do this, and I did it, and it didn't work, and so I'm going to blame the AI."
The employee still retains and remains fully responsible for everything that happens in their use of AI. That's the first kind of most important part of how we think about AI.
We also make sure that employees don't use any kind of sensitive information, any customer data, any personal information in AI models that we don't have full control over. This is why having dedicated instances of AI models is very important. And ensuring that there's no training or sharing of any data that happens when you're using that underlying AI model.
We're fortunate because, within our own product, Box AI, we're able to leverage dedicated, isolated, safe and secure AI for all of our data. And again, that can be anything from a sales rep drafting an email that's going to go out to a customer and they want to get AI to help them very quickly with that. It could be a lawyer that is summarizing a contract and quickly extracting critical clauses that they need to get from that contract. It could be a marketing professional at Box that needs help on what's the message for a particular marketing effort or a new press release.
We use our own product, Box AI, for helping us across all of those types of use cases in the business. In some cases, that might be a 5% or 10% productivity gain. In other cases, it could save an employee hours and hours from a task that they were otherwise going to do.
Michael Krigsman: How do you manage these hallucinations or, more broadly, the risk of inaccuracies seeping in?
Aaron Levie: We spend a tremendous amount of time on this issue of hallucinations. If you just sent a document to any kind of average AI model, you have a high risk where, as the AI reads that document, there's a high chance that it's going to hallucinate to some extent.
We spend a tremendous amount of time in our back-end systems on how we do the vector embeddings of the document, which AI models we use and where we send the underlying content, and the prompt itself, all of which we've worked on to ensure there is as little hallucination as possible. We have worked to reduce the likelihood of the AI model coming back with an answer that is not based on the underlying text that we're giving it.
Then we are continuing to work on improved ways of offering citations. You have instant ways of seeing, "Hey, this is why the AI gave you this answer or this recommendation."
You as a user have full control, visibility, and explainability into how the AI came up with what it came up with. That's going to be incredibly important as a fundamental user right in the future: understanding how the AI made a recommendation, whether it's a healthcare outcome, a financial suggestion, or the meeting notes of a meeting.
You want explainability in the underlying recommendations from the AI model.
Michael Krigsman: Building in that audit trail capability, essentially.
Aaron Levie: That's right. That's exactly right.
Michael Krigsman: We have an interesting question from Twitter. This is from Arsalan Khan, who is a regular listener. He asks really thoughtful questions. Arsalan, thank you for all your questions.
He says this. He says, "By and large, today's push for AI is because of the release of ChatGPT back in November." "Why," he says, "were enterprises timid to really get involved with AI before then, and what are enterprises timid about today?"
I'll just mention that I think one of the answers (at least to the first part – why were enterprises timid before) is this is a marketing thing. ChatGPT marketed AI.
Aaron Levie: The ChatGPT moment was incredibly unique and idiosyncratic. It was the combination of just, frankly, the best large language model that we had ever seen up to that point, which was GPT-3.5. This is where they added the reinforcement learning from human feedback mechanism.
That was really important for tuning the model in a way that provided you good answers. You had both the model itself as being a unique moment and then the interface of having a conversational UI to a large language model was also really important.
For those of us that have been doing this for a while, you'll know that you'd go to openai.com maybe a year and a half ago, and you had this sort of playground environment. Really, all it is, it's just one text box you have to type in, and then the AI model kind of continues on the text that you were writing.
But that's not an intuitive interface for the average consumer. It's really, again, a sandbox playground-type environment.
It was that combination of a chat interface and a very, very powerful model that I think created the ability for consumers to see, "Holy crap! This is the power of AI right now." I think, in a really weird way, it probably would not have been possible for everyone to have this kind of form of enlightenment and excitement prior to a year ago.
We're really kind of only about a year into this modern wave of AI. It really was the iPhone moment of AI because it was the first time that we had a commercial, at-scale way of experiencing what AI is going to be able to do for us.
At this point, if we look at it, we're almost a year in, kind of the first anniversary of ChatGPT – about ten months in. At this point, I think there are probably still more questions than even answers, which is: Okay, how do we deal with the privacy of AI? How do we deal with the copyright potential challenges?
We're seeing examples come out from the various judges that say, "Okay, you can't patent or you can't copyright the works that an AI model is producing." That's going to have some really new and interesting consequences, which is, if you're a marketing team or you're a product development team and you use AI to help you with some kind of part of your product, how much of that is going to be patentable depending on which part AI played a role in creating?
There are many more questions at this stage that I think ChatGPT has thrust onto the market. But it's actually good that it's happening that way.
This is something where we need some kind of trial by fire. We have to actually see where the edges of this technology are and how the market evolves.
This is not something you overthink. You have to explore, experiment, and have real examples to figure out where the boundaries are here.
Michael Krigsman: The second part of his question is, "What are folks in the enterprise (business leaders and technology leaders) timid about today as far as adoption goes?" I think you've touched on some of it.
Aaron Levie: The bigger things are usually around security, privacy, copyright challenges, risk, litigation – all of that. These are very real open questions. I think vendors are making great strides in helping companies work through this and deal with it.
Within Box, for instance, again, our commitment is ensuring that no customer data is ever used to train any model whatsoever. That's a very important commitment.
We have a commitment that there's no logging or exhaust that happens in the process that improves the model.
We also, because of our architecture, ensure that there's no data leakage where one user might be using AI to access data that another user owned or had access to. You ensure that there are firewalls between each individual user's experience in terms of the content that they can access.
These are the things that customers are, I think, very rightly concerned about, and these are the areas that our platform is meant to go and help customers with.
Michael Krigsman: How do you think about this copyright issue or some of these things you were just describing? They're very tricky, thorny issues.
Aaron Levie: We're still very early into both the precedents and the case law that gets set for this.
I think it's a really interesting tension where there's no question that there are sort of two sides to it. There's how the model got created itself and how much copyrighted work went into the model. Then the other question is, can you copyright things that the model produces as an output?
I think, in both of these things, we are at the start of just an unprecedented period of how this technology works.
There's a really interesting question of what is the role of the rightsholder in the training process.
We're seeing great examples where you can block access to your content from being trained on. Having that right is obviously a great idea.
There are some examples of companies saying, "Okay, we want to get paid for access to our information or for the rights to our data," and that also makes a ton of sense.
I think there's going to be some complexity if the models attempt to keep track of all of the rightsholders inside the model and then pay some kind of residual or fee for usage. That feels like something that will probably be impossible, technically, to achieve at scale.
We're really going to have to figure out how these models get trained and how you keep track of some of the rights elements in that.
Then on the other hand, you have what the model produces and how much of that is copyrightable. I think we're, again, witnessing some really interesting cases right now that are centered around this.
Then there's this question of how much did the human do and how much did the AI do, and where in that continuum. If the AI did the vast majority of the creation, then I think it's reasonable that a human is not going to be able to get a copyright for that output.
This is a very nuanced topic because we have to decide, what is the percentage that humans do? For 100+ years, humans have been using some form of mechanical technology or computer technology to produce things that we then patent. And so, clearly, we've already had so much precedent for this idea that I'm going to use a technology to create something that I become the rightsholder for.
AI is just another accelerant of that trend. But when does it crossover into something that now the computer is doing the vast majority of the work? And how do we think through that?
Michael Krigsman: Just laughing to myself as you were talking because you could define a percentage, but it doesn't work that way.
Aaron Levie: Right.
Michael Krigsman: The two become infused and melded. It's like a mind meld of the two. How do you separate it?
Aaron Levie: Just like Photoshop, let's just say. If I created an image in Photoshop five years ago, do I own the copyright to that? Yes. But clearly, Photoshop is giving me tools that allow me to produce something that I would never have been able to produce before.
AI is, in many respects, another step in that direction. But there is something about the AI doing the extra work of the creative process or the production process that exceeds our typical way of defining tools. And so, it's giving us all this new surface area and ground to cover that we're going to have to deal with from a legal standpoint.
Michael Krigsman: Given all of this, you've laid out the challenges quite well. What do you recommend that, again, business leaders and technology leaders actually do today?
Aaron Levie: What we're seeing a lot from our customers is, first, setting up these kind of cross-functional groups to go and identify where AI can be applied in the business and how they can apply it safely, dealing with the legal, compliance, and security decisions.
For literally the past nine months, we've had a standing working group where we have representatives from privacy, legal, security, compliance, the business, and technology come together and say, "Where are we going to use AI in our business to be as productive as possible but do so in a way that is legally safe and compliant with the industry standards that we have?"
I'd first recommend all enterprises build some form of committee or working group that can help with that. Then the second thing is you need to make sure that you have a framework or an architecture that allows you to, again, have a future-proofed AI strategy.
There's so much changing in the industry right now. I think it behooves most businesses to have a high degree of optionality and range of motion of how they implement AI in their business because we are so early in this next wave of AI that it's going to be really important that you don't over-lean into one particular path or direction. That's the second piece.
Then the third and final piece, I would say, is you need to test. You need to get ideas from the business. You have to get use cases from the actual users.
I think it's worth being maybe a bit permissive on what people do with the technology, just so you can learn from them, while stopping short of becoming so fragmented and decentralized that you get utter chaos.
We're seeing some customers do hackathons where they want employees to test with AI to figure out if some business process could be improved because of AI. Or they're letting their employees start to use AI for different use cases, and then they want to learn from those so they can standardize across the business.
Those would probably be the three key things: one, a taskforce for figuring out the standards; two, optionality in the core architecture so you have a future-proof architecture; and three, test, iterate, and get some of those decentralized use cases going.
Michael Krigsman: The technical architecture is a very important foundation, getting that right.
Aaron Levie: It's a wave of technology where your architecture will define your strategy. The architecture you land on for using AI on your data will 100% create a fixed outcome of how you're going to be able to leverage AI in the future.
Getting that right, having a high degree of flexibility, understanding that there's a lot of different model competition right now (so you want to be in a position where you're not overly wedded or stuck to just one particular model provider, given how much competition there is), that's going to be super important for enterprises.
Michael Krigsman: Can you elaborate on that? That seems like a very important point. You just said that absolutely your technical architecture will determine your strategy.
Aaron Levie: We're seeing a lot of companies just struggle with, okay, what kind of bets do you make right now versus where do you want flexibility? Jeff Bezos has this line that I think is just probably the most important way to think about a lot of technology, which is one-way doors versus two-way doors.
I'll start with two-way doors. Two-way doors, you basically can go in and out of them and change your mind. You can iterate, you can pivot, you can move around, you can adapt quickly.
One-way doors, once you go through it, you're sort of stuck. You're on the other end of that door and there's no going back. Everything you do subsequent to going through that door, you are stuck with that decision.
I think AI offers a lot of these two-way doors versus one-way-door-type decisions.
A two-way door in the AI world is, how do you have an abstraction layer from your business process or your data from the AI models? That way I have the ability to swap in different providers or different components over time as there's more advanced technology or more advanced breakthroughs on the AI front.
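The "two-way door" abstraction layer Levie describes can be sketched as a small interface that business logic depends on, so any vendor's model can be swapped in behind it. This is an illustrative assumption of what such a layer might look like; the provider classes and names here are hypothetical, not real vendor APIs.

```python
# Sketch of an abstraction layer between a business process and the AI
# models behind it. Provider classes are illustrative stand-ins; a real
# implementation would call actual vendor SDKs behind the same interface.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] answer to: {prompt}"

class VendorBModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] answer to: {prompt}"

def summarize_contract(model: TextModel, contract_text: str) -> str:
    # The business process is written once against the interface;
    # swapping providers is a configuration change, not a rewrite.
    return model.complete(f"Summarize the key clauses:\n{contract_text}")

# Swapping VendorAModel() for VendorBModel() requires no other changes.
summary = summarize_contract(VendorAModel(), "Term: 24 months. Auto-renews.")
print(summary)
```

The design choice is the two-way door itself: because `summarize_contract` depends only on the `TextModel` interface, a breakthrough from a different provider can be adopted without rebuilding the process on top of it.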
A one-way door would be saying, "I'm going to be stuck with one particular model or one particular AI paradigm right now, and I'm going to build right on top of that." Now, that offers some benefits because any time you do vertical integration, there's some efficiency gain for that.
The challenge, of course, in AI, I would say, is that on a weekly or at least twice-monthly basis, we are seeing some breakthrough in AI where one company is leapfrogging another. We have OpenAI. We have Anthropic. We have Google. We have Amazon. We now have Meta releasing Llama and Llama 2.
You have this incredible set of leapfrog events that are happening in the industry. I think it's going to behoove most enterprises to have an architecture that allows them to take advantage of these breakthroughs and not be overly stuck to one particular paradigm, particularly just this early in the evolution of AI.
It's one thing to say, "Okay, I'm going to dedicate a full team to building iOS apps," because you know that 90% of your employee base is going to use iOS at this point if you're a typical enterprise. But with AI, we're just so early.
We don't know which model or which particular paradigm is going to win out at this stage. So, having that flexibility, having that optionality, I think, is going to be very key for customers. That's certainly what our platform is meant to deliver.
Michael Krigsman: We have a question from Shelly Lucas. She's @pisarose on Twitter. She says, "The demand for AI has outpaced chip supply. How much should enterprises invest in AI development when chip production is so limited?"
Aaron Levie: If you're an enterprise, I think right now is the wrong time, let's say, to be doing lots of deep custom model training because, to Shelly's point, we're at the peak of a supply-demand imbalance. There is limited supply of GPUs and of the core infrastructure needed to train models. We are not at peak demand, but we're at peak demand relative to what we've seen in recent years, which means that costs are going up dramatically. You would be experimenting or training models at a point where it's extremely expensive to do so because there's so much scarcity of the underlying infrastructure, versus a year or two from now, when that cost curve may come down.
I think there's a huge premium on building the layer above the model training, plugging into platforms (like OpenAI or Azure or Anthropic) and being able to use those models to really perfect what you can do within your business to leverage AI. Then as the cost curve comes down on training your own models, I think that starts to become much more realistic and reasonable for enterprises.
Now, there are a lot of different paths companies are taking. I think there's no sort of wrong answer right now. But to Shelly's specific point, I think, right now, I would be spending more time on the abstraction layer of AI as opposed to just pure training of models because of where we are in the cost curve on model training.
Michael Krigsman: Arsalan Khan comes back with another question. He says, "To use AI for decision-making, we need to have the right data and the right algorithms at the right time. Do you advise organizations to do proper data management and to know about their algorithms before using AI or to do it all at the same time?" It's a really thoughtful question.
Aaron Levie: The right answer, academically, is to get your data into a good spot, have it organized well, have the right kinds of permissions and controls, and then layer AI on top of that. That's the intellectually correct answer in this case.
The reality is, this space is moving so quickly. Some companies can't afford (or they don't believe they can afford) or there are other business imperatives that are demanding the need to do both of these simultaneously.
The practical answer right now is that a lot of companies are going to have to get moving on both fronts in parallel. But no matter what, you can't use AI effectively if your data is not organized properly, in a good position, with the right access controls, in a secure and compliant way. It's just not possible.
We've heard examples of a company that says, "Hey, here's an AI interface for interacting with data in my enterprise." They shove all the data in next to the AI model but forget to put in any access controls. Then, all of a sudden, somebody asks the AI a question and gets an answer containing something they're not allowed to see.
Almost by definition, you do have to get your data into a good spot. It has to be organized properly. It has to be securely protected with the right types of access controls. Then AI really comes on top of that.
Michael Krigsman: Whether you do it ahead of time or you do it at the same time, it needs to be done.
Aaron Levie: And it has to be done first, but I do see companies having to say, "You know what? We have to move on this in parallel."
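The failure mode Levie describes, an AI answering from documents the asker isn't allowed to see, is avoided by filtering content against permissions before it ever reaches the model. The sketch below illustrates that pattern under a deliberately simplified, assumed permission model (group-based access on each document); it is not Box's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)

def retrieve_for_user(docs, user_groups, query):
    """Return only the documents this user may see, so the model
    never receives content outside the caller's permissions."""
    visible = [d for d in docs if d.allowed_groups & user_groups]
    # A real system would rank `visible` by relevance to `query`;
    # here we simply return everything the user is allowed to read.
    return visible

docs = [
    Document("d1", "Q3 revenue plan", {"finance"}),
    Document("d2", "Employee handbook", {"finance", "hr", "eng"}),
]
context = retrieve_for_user(docs, {"eng"}, "What is the Q3 plan?")
print([d.doc_id for d in context])  # the finance-only doc is filtered out
```

The key design choice is that the permission check happens in the retrieval layer, before prompt construction, so the model cannot leak what it never sees.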
Michael Krigsman: We have a very interesting question from Mike Prest. He's chief information officer at a private equity investment group. This is from LinkedIn, and he says the following: "Elon Musk mentioned this week that he thinks it's very likely that there could be a federal regulatory agency for AI. As a business leader, what are your thoughts on potential regulation for the development of AI?"
Aaron Levie: I'm firmly in the optimistic camp of what AI is going to enable. I think it's actually going to enable more jobs. I think it'll enable more fulfilling jobs. I think it'll make us all more productive. And by my definition, that just means working on the things that humans are better suited to do, things that are more creative and that we get more fulfillment from.
I'm firmly and deeply in the camp of AI as a total net positive, almost universally. There will be a few examples of roles that need to evolve and jobs that will need to shift. But for the most part, I think this is great for students, for healthcare, for financial preparedness, and for employee productivity. I think it's all pretty net-positive on those fronts.
At the same time, there are some areas of risk. There are areas of risk around security. There are areas of national security risk in the space of AI.
I'm not in the camp that AI is (at any point in the foreseeable future) going to go rogue or start to make decisions on its own in a dangerous way. But I do think there are real risks that we have to pay attention to.
Whether that is the creation of a federal agency just dedicated to AI or within the construct of an existing framework or agency, I think the government is going to have to pay attention to this, is going to have to create some rules of the road. But I would be more in the camp of being (on the margin) more open than closed about that approach.
I think that we should not just have AI in the hands of the three or five companies that happen to have all of the scale and can comply with all of the regulations. I do think it's important for this technology to be diffused and, to some extent, decentralized, with open-source approaches, closed-source approaches, commercial approaches, and public-private approaches.
I would definitely be sensitive to over-regulation at this stage in the development of this technology. I'm firmly in the camp that this is not a technology equivalent to something like nuclear weapons or other technologies that we think of as more on the destructive side. I think this is much more akin to the Internet or the computer chip, and we clearly don't have regulatory bodies that control those things. We have frameworks that each individual industry or part of society has had to mature around as those technologies developed.
I'm probably on the margin, more in that kind of camp. But I think our government (in the U.S., at least) is taking a thoughtful approach to this. So far, I've been optimistic from what I've seen.
Michael Krigsman: Since Mike Prest brought up Elon Musk, any thoughts on Elon Musk and how he uses social media, anything at all? [Laughter]
Aaron Levie: [Laughter] That's about a three-hour conversation. I would just say we live in a very surreal world that makes it feel like a simulation sometimes. I'm just watching from afar, but I'll keep on posting online and we'll see where that gets us.
Michael Krigsman: You're watching the simulation from afar.
Aaron Levie: And probably am in it, so I think we don't get too much of a choice of whether we're in the simulation or not.
Michael Krigsman: When it comes to leadership, you've founded Box. It's now a public company, a well-known brand. Has AI changed the way you lead and manage? Very quickly, please.
Aaron Levie: I think we're in the early stages of how AI impacts how we lead and how we manage. I really think it will come down to this idea of having a superintelligence that is right next to us that can help us make better decisions, more informed decisions, get access to more information, and I think that's really the main role and power of AI.
I don't think it's going to fundamentally change the role of the manager or of the work that we're doing. I think it really is the next stage of the human-computer interaction and relationship, with even more of a turbocharger of value for us.
Michael Krigsman: What are the Box use cases that inspire you right at this moment?
Aaron Levie: The reason why we're so incredibly excited is, if you think about the past 30 or 40 years since we've had modern database technology, you've always been able to query and synthesize and run reports on and get insights from our structured data. Our CRM data, our ERP data, our analytics data: all of the stuff that goes into a database, we've been able to programmatically operate on forever, essentially.
The challenge is that we've never been able to do the same with our unstructured data: our documents, our marketing assets, our memos, our meeting notes, our plans, our strategy presentations. All of that content is generally only useful once an actual end-user has opened the file and is looking at it and working on it.
Now with AI, for the first time ever, we can actually leverage all of that information, all of that intelligence, all of the value that's trapped inside that data, and we can ask questions about it. We can synthesize that information. We can summarize it. We can have expertise get applied to it from third-party intelligence that we want to be able to leverage. For us, we see it as a breakthrough in just being able to work with our information.
Say I'm a lawyer and I want to look at a contract and quickly summarize the risky parts I should pay attention to. I can now do that in two seconds as opposed to two hours.
Or say I'm working on a press release for a new product and I need some quick ideas. Again, I can now do that in two seconds as opposed to hours and hours of brainstorming or trying to figure things out from a cold start.
The ability to use AI to make us more productive, more creative, and more efficient, and to get new ideas, is, we believe, going to be a huge amplifying force in how we get our work done.
Michael Krigsman: Those comments, the comments from the AI, the summarization, how can I have confidence that they're accurate? That's a big deal.
Aaron Levie: This is what we spend all of our time on: improving accuracy, reducing hallucinations, and providing the right kinds of citations so you know why you saw the answer you did. Those are the things we're focused on, and we've been able to achieve extremely competitive results with our AI solution.
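The citation approach Levie mentions, letting the user trace an answer back to its source, can be sketched as returning the supporting snippets alongside the answer. This is an illustrative pattern only; the function, field names, and the stand-in answer string are assumptions, not Box AI's API.

```python
def answer_with_citations(question, passages):
    """Attach source citations to an answer so the reader can verify it.
    `passages` maps a source identifier to the snippet the answer drew on."""
    # A real system would have an LLM generate the answer grounded on
    # these passages; here the answer text is a stand-in string.
    answer = f"Answer to: {question}"
    citations = [
        {"source": sid, "snippet": text[:80]}
        for sid, text in passages.items()
    ]
    return {"answer": answer, "citations": citations}

result = answer_with_citations(
    "What are the risky clauses?",
    {"contract.pdf#p4": "Either party may terminate without notice..."},
)
print(result["citations"][0]["source"])  # contract.pdf#p4
```

Returning citations alongside every answer gives the human reviewer a concrete way to check the AI's output, which is the accountability point raised in the question above.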
Michael Krigsman: Aaron, as we finish up, given your history and background in this industry – and you have such a broad perspective because of the folks you're talking to – what advice do you have for business leaders today to help them be successful in this AI world?
Aaron Levie: Testing out use cases. Seeing where this can have an impact on your business. I think it's very, very hard for any executive or group of executives to get around the table and come up with all the ideas where AI is going to have an impact on their business.
I think it's really important to have your teams, different parts of the workforce (sales, marketing, HR, R&D), come up with ideas. Then have an architecture that allows you to support many of those ideas without a fragmented approach to how you manage the technology or how you stay secure.
I do think diving in, iterating, testing, but being safe about it is incredibly important.
Michael Krigsman: Okay. With that, we are out of time. Just a huge thank you to Aaron Levie. He is the co-founder, CEO, and chairman of Box. Aaron, thank you for coming back again to CXOTalk.
Aaron Levie: Thank you.
Michael Krigsman: We really appreciate that.
Everybody, thank you for watching, especially those folks who ask such great questions. You guys are an amazing audience.
Now before you go, please subscribe to our newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com. We have amazing shows coming up.
Thanks so much, everybody, and I hope you have a great day.
Published Date: Sep 15, 2023
Author: Michael Krigsman
Episode ID: 805