Inside AMD’s AI Strategy with EVP and CTO Mark Papermaster

AMD's EVP and Chief Technology Officer joins CXOTalk episode 884 to share the company's AI strategy and offer executive advice on enterprise AI.


In CXOTalk episode 884, Mark Papermaster, Executive Vice President and Chief Technology Officer of AMD, explains the company’s AI strategy: delivering an open, full-stack hardware portfolio complemented by a rapidly expanding AI-software ecosystem. AMD wants to enable enterprises and cloud providers to deploy large-scale AI solutions efficiently.

Papermaster details AMD’s partnerships with large-language-model developers, the influence of supply-chain geopolitics on semiconductor strategy, and the company’s work to reduce the growing power and cooling demands of AI infrastructure. He also shares AMD’s technology roadmap, the decision framework behind multibillion-dollar architectural bets, and practical guidance for enterprise buyers adopting next-generation AI.

Key Topics:

  • Building a Full-Stack AI Platform – why AMD is bundling hardware, software, and solutions
  • Competing for Data-Center AI – AMD’s strategy to gain share in GPU acceleration
  • Designing for Reasoning Models – how massive context windows shape chip architecture
  • Sustainability at Scale – tackling energy consumption and cooling in AI deployments
  • Balancing R&D and Product Cycles – fostering innovation while hitting ship dates
  • Executive Takeaways – what leaders should do now to prepare for AI’s next wave

Key Takeaways

Align Hardware and Software Development

  • Treat software as an equal partner to hardware to maximize performance in the AI era. This approach ensures compute engines are fully optimized for emerging AI algorithms.
  • Foster deep collaboration between hardware and software teams to drive innovation. A tight partnership enables your company to develop a competitive roadmap and establish a strong position among industry leaders.

Cultivate an Agile and Execution-Focused Culture

  • Prioritize speed and adaptability to keep pace with the rapid rate of industry innovation. Your organization must be prepared to change direction quickly in response to new algorithms and customer needs.
  • Build a culture centered on execution and dependability to earn customer trust. Consistently deliver on your product roadmap with high quality to become a reliable partner.

Adopt a Hybrid and Specialized AI Strategy

  • A one-size-fits-all approach to AI infrastructure is insufficient for modern enterprise needs. Implement a hybrid model, using the cloud for large-scale training and on-premises systems for tailored, low-latency tasks.
  • Invest in smaller, domain-specific AI models to increase efficiency and reduce costs for point applications. These tailored models apply proprietary data to solve specific business problems more economically.

Champion Open Ecosystems to Drive Competition

  • Support and contribute to open standards and ecosystems to avoid vendor lock-in and foster innovation. A non-proprietary approach provides customers with greater choice and flexibility.
  • Collaborate with competitors on open initiatives, such as interconnect standards, to build a healthier market. This strategy encourages broader industry participation and creates more robust solutions for all.

Prioritize Customer Listening and Clear Communication

  • Dedicate significant leadership time to listening to customers and understanding their core challenges. This direct feedback is essential for ensuring your product portfolio addresses real-world problems.
  • Hone your communication skills to align teams and articulate your value proposition clearly. Effective two-way communication, both internal and external, is paramount for developing a winning strategy.

Episode Participants

Mark Papermaster is Chief Technology Officer and Executive Vice President, responsible for driving the company’s end-to-end technology vision, strategy, and product roadmap. He led the re-design of engineering processes at AMD and the development of the award-winning “Zen” high-performance x86 CPU family, high-performance GPUs, and the company’s modular design approach, Infinity Architecture.

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator known for his deep expertise in business transformation, innovation, and leadership. He has presented at industry events worldwide and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

AMD's Journey and AI Evolution

Michael Krigsman: AI infrastructure is a critical and rapidly evolving part of the artificial intelligence landscape. I'm Michael Krigsman, and today on CXOTalk episode 884, we're exploring this topic with Mark Papermaster, Chief Technology Officer and Executive Vice President at AMD. Mark oversees an extensive portfolio of hardware and software. Let's get into it.

Mark Papermaster: I've been very fortunate because I've had the chance for the last 13 and a half years to be the CTO of AMD during an incredible inflection point in the industry. Lisa Su and I were recruited at that time because AMD was a storied Silicon Valley company, over 55 years old, with such promise, but it was facing challenges at that time.

It was an opportunity to jump in right as AI was starting to emerge. Think about 2012, when natural language processing first started achieving much higher accuracy rates than other techniques. It was an effective use of neural net engines and the first promise of what was yet to come.

The timing was perfect. AMD had great building blocks: CPU technology, GPU technology, other accelerators, and the know-how to put them together to address specific markets. This includes everything from supercomputers down to PCs and embedded devices. It's been a phenomenal opportunity. There's great talent at AMD, and it's been a dream job.

Michael Krigsman: The technology world is going through another inflection point today with generative AI. What does that mean for AMD?

Mark Papermaster: The real inflection point was indeed generative AI. We saw the promise coming of what could be, and that's why we started investing in making sure our technologies would be there, both hardware and software.

But generative AI was such a fundamental inflection, Michael, because it made AI accessible to the masses. It started, of course, with ChatGPT. Think about when ChatGPT came out in November 2022, and suddenly it's a conversation. You're putting tokens or questions into ChatGPT, and it's accessing supercomputing to give you answers. You never thought of getting that type of intelligence out of a computer before. It really opened up AI, and the supercomputing underneath it, to the masses.

What have we seen? It's hard to believe it's been just a few years since then, with such an explosion of capability and more and more accuracy from large language models like ChatGPT, Grok, and the rest. But you're also seeing the shift to inference, where people have really started to realize how you can deploy AI and fundamentally change almost every process that we've dreamed of.

AMD's Strategy in the AI Era

Michael Krigsman: This shift to inference, this expectation of accuracy that you were just describing, what does that do to AMD in terms of your technology and your strategy?

Mark Papermaster: The need for more accuracy and the shift to inferencing, meaning millions and millions more users, is stunning. Already ChatGPT is handling 400 to 500 million user interactions per week.

For us and our peers in the industry, it means that the need for the computation to support that is growing exponentially. I call it this insatiable demand for more computing. It needs to be efficient because you can't just throw more and more engines at it; it would burn more power than we have available.

It takes a lot of innovation, Michael, to be able to drive forward that kind of computing capability in a smart way. It really takes understanding how those software algorithms work with the hardware because, otherwise, you can't truly optimize.

Michael Krigsman: Your strategy then involves both the hardware and the software interaction. You have broadened the portfolio, could we say, from your traditional business, which was more narrowly focused on the processors?

Mark Papermaster: We've always been a hardware and a software company, but for years we were capital-H Hardware and small-s software. Software was very important but always an enabler. Now, in this AI era, the software is equally or more important than the hardware, because we can't let up at all on hardware. As I said, those engines have to be more and more efficient with every generation.

To unlock that power, you have to understand where the AI algorithms are going. What are the techniques that could be used to make AI more efficient? Can we use different matrix math methods to make it more efficient? Are there better algorithms? Think about the change when the transformer architecture was introduced into AI models, and flash attention. These are all techniques in the algorithms that allow AI to be more accurate and more efficient, and the hardware has to match up with that.

It takes a very tight collaboration of hardware and software, and that's what makes it hard to bring competition into the market in AI. I'll tell you, Michael, for AMD, we had to earn our way to get a seat at the table with these largest of companies that are creating the cutting-edge large language models and new advancements in AI. We had to prove that we're that player. We can bring that competition, and that got us access with their developers. That allows us to make sure that our roadmap is absolutely competitive in AI going forward.
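To make the flash-attention reference concrete: standard transformer attention materializes a full n-by-n score matrix, so memory traffic grows quadratically with sequence length, while flash attention reorganizes the same math into tiles that stay in fast on-chip memory. Below is a minimal NumPy sketch of the standard computation, purely for illustration; it is not AMD or ROCm code.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    This naive version materializes the full (n, n) score matrix;
    flash-attention-style kernels compute the same result by streaming
    K/V tiles through on-chip memory, avoiding that quadratic traffic.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (n, d) attended output

# Toy usage: a sequence of 8 tokens with 4-dimensional heads.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (8, 4)
```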

Michael Krigsman: What does winning for AMD mean in this market today, as you've been describing it?

Mark Papermaster: We are very tight in our communication with our CEO's leadership, and her message is always driving us to be the very best that we can be. When we enter a market, we intend to rapidly gain share. We want to be a leader in that market.

The analogy I'll give you, Michael, is what we did in CPU servers. When Lisa and I joined, we had to exit the server market because we didn't have a sufficiently competitive x86 CPU. Our strategy was to get a leadership CPU. Let's go win that market because it's clear that so many workloads were moving to the cloud and moving to the data center.

We did exactly that. It doesn't happen overnight when you do chip design. It took literally five years to get that new leadership CPU; it's called our Zen family of processors. But look at what's happened. In the server market, we went from virtually zero share in 2017, when we launched it, to roughly 40% market share right now, versus the incumbent leader, Intel, which had the dominant share.

That's exactly what we want to do with GPUs. CPUs and GPUs work very closely together. That's what we want to do in AI, and that's what we've already started. We launched our first AI-optimized processor, the Instinct MI300, in December of 2023. In its first year of production, 2024, we went from virtually zero revenue in data center GPUs to $5 billion. It was by far the fastest product ramp ever at AMD, taking us from zero share to about 5%. It's a huge market, but we won't let up.

That CPU journey I described took generation after generation of listening to customers, putting out products, hearing, having that seat at the table, and then improving the product hardware and software every generation. That is exactly what we're doing for the data center GPU. It's what we're doing across our portfolio because our whole portfolio, from PCs to embedded devices to gaming graphics, is all AI-accelerated.

Michael Krigsman: Go to cxotalk.com. Subscribe to our newsletter. We have incredible shows coming up, so check them out.

Building Expertise and Collaborating with AI Leaders

Michael Krigsman: How do you work with the model makers, with the OpenAIs and others in this world, to optimize what you're doing against what they ultimately need?

Mark Papermaster: We had to beef up our skills in this area. You can't show up at an OpenAI or a Meta, folks that are absolutely steeped in the fundamentals down to all of the details of what it takes to create optimized large language models, handling these massive tasks that generative AI takes on. Think about the billions to trillions of parameters that are going into these large language models, Michael.

What we had to do, and what we did, was to mode match. We took our brightest software leaders and brought them to the table. You don't come in with a marketing team. You don't come in with a waving of hands. You come in with your deepest and steepest of technical experts. Then we grew that software expertise through organic hiring and also acquisitions.

At this point, we have very skilled teams that can sit down and really go through and understand where the bottlenecks are. How can we enable them with the hardware and software that we provide, across those CPUs, GPUs, and now racks? You have to build that up into rack-scale expansion to be able to handle training and inference for these largest LLMs.

That is a muscle that we have built up over the last two years. We had to do it very quickly, and we have to be incredibly agile. If there's one constant as you work with companies like OpenAI and Meta and the rest, it's change. We'll sit down with them and they'll say, "Well, guess what?"

Agility and Cultural Transformation at AMD

Mark Papermaster: "We're going down this path. We found a better way. There's a better algorithm, and here's what we need to do differently." We were always an agile company, always able to be quick on our feet. That's one of the stalwarts of our AMD culture. We put that to the test as we work with those AI model companies. Luckily, we're good at it. We react quickly, adapt, and meet their needs.

Michael Krigsman: Folks, right now, you can ask your questions of Mark Papermaster. If you're watching on LinkedIn, just pop your question into the chat. If you're watching on Twitter/X, use the hashtag #cxotalk and get your questions answered.

Mark, is there a cultural dimension to this change? You mentioned developing a new set of muscles. Is there a cultural change that has to go on when you go through this type of inflection?

Mark Papermaster: We went through a cultural change at AMD first, overall. That's been the fuel of the whole turnaround. If you look at AMD over the last 10 years, we've grown dramatically, Michael, and I attribute our culture as a big piece of that. The change in culture was to really focus, first of all, on execution.

When you put out a product roadmap, talk to customers, and listen to them, you build features that you know they need, that you know can differentiate your product and make it better than the competition. You have to deliver that when you said you would, with quality, and become a dependable supplier. That has been a maniacal focus. When I got here, I started working with the rest of the engineering team on re-engineering our engineering processes so we could be that repeatable engine, delivering new leadership technology cycle after cycle.

When Lisa stepped up, she was running all of our businesses in AMD, and in late 2014 she became CEO of the company. She brought this amazing focus across the entire company on listening to customers and delivering products that really make a difference. That's that execution engine.

Thirdly, just simplicity. Let's not be a company that's caught up in complexity; let's really simplify how we do things to make sure we're as efficient as we can be. That was the change from a company standpoint. Then came this explosion of AI, as you say, going back to the release of the first generative AI models, and what we've seen since is such a massive inflection in the industry that it's still going. That's requiring yet again new muscle in the company.

That new muscle is, one, like what I said earlier, being as much a software company, if not more, than we are a hardware company. That's been a change. Secondly, it is just the speed at which we move. I have been in this industry for four-plus decades, and I have never moved at the velocity that we are now. As I look across our whole company, we are moving at a faster velocity than we ever have. When I look across the entire industry, I see everyone moving at a faster pace. AI is accelerating the rate and pace at which innovation is delivered to the market.

Navigating AI Infrastructure Challenges

Michael Krigsman: We have a question coming in from LinkedIn. Again, folks, when else will you have the chance to ask Mark Papermaster, the CTO of AMD, pretty much whatever you want? Take advantage of it.

This question is from Preethi Narayan, who says, "As the cost of running large-scale AI models in public cloud environments continues to rise, many enterprises are re-evaluating high-performance on-prem or hybrid infrastructure options. Yet bare-metal deployments bring their own challenges in terms of complexity and maintenance. From AMD's perspective, how should CIOs and CTOs navigate this trade-off, and are we approaching a point where hybrid AI infrastructure becomes the strategic norm?"

Mark Papermaster: One size doesn't fit all. We think about that with our product portfolio. We are going to offer choice. We offer a broad ecosystem so customers can really have choice, not a bespoke, "Here's your only AI solution," like a rack that our competitor puts out there without giving you the ability to really tailor it as you need. We work with OEMs, and we work with hyperscalers.

What you're starting to see is a bit of a dichotomy. You're getting massive rack-scale designs in the hyperscalers that are supporting the largest of large language models. Where you have significant training needs or large-scale inferencing, you're probably going to continue running those in the cloud. By the way, because of the efficiency that we're driving in the industry, the cost per token is going down. The total cost of the computing is going up because we're adding more capabilities, but the actual cost per token is going down.

It's still an expensive bill because all of industry is bringing on more and more users as people are starting to deploy AI. They're running AI inferencing in most every process that they have. Consumers are using more and more AI in their daily lives, so that demand is going up.

How are businesses thinking about that? I do see them moving to a hybrid model. I see them, as I said a moment ago, using the cloud for those big tasks, but they're starting to tailor and fine-tune models to their business. They're harnessing the data they have. You don't need an LLM that you can ask anything. You're creating your own large language models and, in some cases, small language models tailored to more pointed tasks. That's allowing businesses to develop and support them more economically on-prem, and with lower latency. It's faster because it's right there on the factory floor or at the point where users are seeking immediate answers.

I do see a hybrid approach, a dichotomy. There's a third leg of that, and that's the embedded devices. You have on-prem, what you're running in the cloud, and thirdly, the edge for AI is really growing. That's where you're literally embedding the AI engines at the point of data acquisition so that it's immediately providing smarts to the process that's being controlled.

The Future of AI: Tailored and Efficient Models

Michael Krigsman: Other guests on CXOTalk have also said that they believe that the future is smaller but more tightly focused models for particular domains and applications.

Mark Papermaster: It's obvious, right? When you step back and think about it, as you start deploying AI, it will get tailored. It will become more efficient.

AMD's AI and Product Innovations

Mark Papermaster: People have to drive the cost down for point applications. We saw this coming at AMD quite some time ago, and we leveraged the fact that we have that breadth of portfolio, everything from supercomputers on down. We have the number one and number two supercomputers in the world, running on AMD CPUs and GPUs. We're growing in the data center with both CPUs and GPUs, but also in smart PCs, Michael.

We were the first to introduce deep AI acceleration for copilots in the PC, and that's been a really growing piece of our portfolio. With our latest generation of gaming graphics, we also leverage the embedded AI engines for beautiful upscaling. Then think about our acquisition of Xilinx in 2022, a leader in embedded devices and field-programmable gate arrays, along with our embedded x86 business, all of it AI-accelerated. We absolutely saw this coming, and we expect continued growth of AI deployments as people understand how to harness their data and bring out effective inference applications.

AMD's Internal AI Adoption and Development Process

Michael Krigsman: Let's take some more questions. We have some from Twitter, from X, and this is from Ricardo De Anda. He says, "How does AMD's internal IT organization act as customer zero for your AI solutions? Are there examples you can share where internal adoption directly influenced product development?"

Mark Papermaster: We're a technology company. Shame on us if we didn't make ourselves customer zero, and we have been doing this for years. Starting about four years ago, we significantly raised the visibility of IT as customer zero. Our CIO, Hasmukh, is directly engaged. He'll engage with customers, understand their needs, and share with them how we are deploying AI. He's also supporting our engineering teams as they run our compute systems. We don't think about bringing a new data center GPU, a new data center CPU, or a new PC to market until our IT has already exercised and deployed it first with the compute users at AMD.

We build clusters of data center computation first. We do deployment of hundreds of PCs before it goes out to the market, and likewise across our embedded applications. It's a little bit harder on the embedded applications, but it's not hard at all for PCs and our data center computing products to be customer zero, and that's exactly what we do.

I'll add, we've sped up our chip design process. That's what we do: we develop chip designs, and the hardware and software have to work together effectively. What do we do? In the chip design process, we are using AI. Just think about it. There are billions of transistors. Our latest AI chip has 154 billion transistors, and our newest one has over 180 billion. How do you get all those transistors laid out across the silicon, with the vast web of interconnect that ties it all together? It all has to be perfect. You can't have one transistor, or one connection to a transistor, that doesn't work.

AI, it turns out, is very effective at helping ensure that we not only have the most optimal implementation; it's also helping our test processes, getting the coverage to make sure that no defects escape through our manufacturing test line. It's been very effective not just in engineering but even in our business practices. Being customer zero also brings direct benefits, as it speeds the success and efficiency of our internal AI applications.

Michael Krigsman: Let's grab another question.

AMD's Technology Strategy and Sustainability Goals

Michael Krigsman: You can see this is an incredible, really smart audience, so I always defer to their questions over my own. This is from Lisbeth Shaw, who says, "What's the technology strategy? Is it thinking of all the product lines or families as one system distributed over different domains?"

Mark Papermaster: When you think about a product portfolio like we have at AMD, you have to be flexible in how you chart the direction. What do I mean by that? Each product itself has to stand alone in its own category. Our x86 Zen line of CPUs has to be the best x86 CPUs that are out there. Same thing with our GPUs; they have to be the best engines that are out there. The same thing applies across our gaming graphics and our embedded devices. You have to think about all the building blocks.

Our roadmap has to make sure our hardware and software, the software enablement, are best of breed. Then, when you put it together, we have a strategy of how they are deployed and how we tailor that to products that meet customer needs. How many CPU cores do they need? How do you need to tailor it when it's being used for database processing versus when that same CPU is being used as a head node, a controller for a vast number of GPUs that it's managing? Two totally different use cases. That applies across each of our portfolios. You have to think through the use case, the dominant use cases of our customers, and make sure you're optimized.

Then there's yet a third step, and that is we have to think across our whole portfolio, how can we get synergy? For AI, we have one software enablement stack, ROCm. That's the name of our software enablement. That is going to be the top level of making it easy to deploy AI across whether it be our data center GPUs, our CPUs, our gaming graphic devices, or our embedded devices. That's an area that we're incredibly focused on right now in 2025, really opening up that capability to users. You can now run ROCm on AMD-based Windows PCs. You can now run ROCm on the latest of the Radeon graphics cards and have it optimized. Even CPUs are an essential part of AI processing, and we have them well-enabled as well.
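One practical consequence of that enablement work, as a hedged illustration: ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda API (backed by HIP), so much existing GPU code runs without modification. A minimal sketch, assuming a ROCm-enabled PyTorch installation:

```python
import torch

# ROCm builds of PyTorch report a HIP version here; CUDA builds report None.
is_rocm = torch.version.hip is not None

if torch.cuda.is_available():          # also True on ROCm builds with an AMD GPU
    device = torch.device("cuda")      # ROCm reuses the "cuda" device string
    backend = "ROCm/HIP" if is_rocm else "CUDA"
    print(f"{backend} device: {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")
    print("No GPU found; falling back to CPU")

# The same tensor code runs unchanged on either backend.
x = torch.randn(1024, 1024, device=device)
y = x @ x
```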

Michael Krigsman: This is from Chris Peterson, who asks, "On the technology side, where does AMD see the GPU-to-GPU interconnects going? On the business side, what's AMD's plan if the AI industry pivots massively in algorithm design or the power, water, and sustainability just can't keep up with demand?" So, two questions. Very quickly, GPU-to-GPU interconnects and then the larger set of business issues.

Mark Papermaster: We have previously gone about that using our proprietary Infinity Architecture, which has been proven through our generations of CPUs and GPUs to date. But we have a commitment at AMD to an open ecosystem. We think it's extremely important that you don't just have a few bespoke offerings, because that's a walled-garden approach. We support, and are a founding member of, Ultra Accelerator Link.

With Ultra Accelerator Link, we donated the protocol we use to connect our GPUs. Now it's out there, not under AMD control but governed by the multiple companies that run the UALink Consortium. They are ensuring that other switch vendors can play. Other competitors of ours that are creating their own accelerators can use the Ultra Accelerator Link standard and use the same switches and connectivity solutions that are out there. We are committed to an open ecosystem, and that is our strategy for GPU-to-GPU links.

The second question is, of course, critically important. What do we do as the demand for energy consumption goes up and up to accommodate our insatiable demand for more AI computation? My short answer is innovation. The demand is just so high. Look at the market opportunity in front of us: hundreds of billions of dollars in the next few years. By 2028, the market is going to be that large. That's an incredible pull for innovation.

When you have a market that large, it means we're going to continue to innovate. How do we make these GPUs more efficient? How do you make the GPU and CPU work together more efficiently? But even more than that, how do you support algorithms that aren't changing so quickly, so you don't need the programmability of the GPU and CPU? We also support tailored and custom devices that our customers can work with us on. I think we're going to see that whole range.

I'll just say, to show our commitment, we just hit a milestone for CPU and GPU computation for the most demanding AI and supercomputing workloads. We set a goal in 2020 to drive a 30X improvement in performance-per-watt energy efficiency over five years, from 2020 to 2025. We declared just a couple of weeks ago, with the advent of our new MI350 series, that we not only hit that 30X improvement in efficiency, we actually hit 38X.

Energy Efficiency and Technological Milestones

Mark Papermaster: We put out measurable milestones to incent our own team to be efficient. We work with professors and universities to make sure it's not marketing fluff; it's real efficiency that's delivered at the end of the day. Now we're on to our next milestone. Sam Naffziger, who leads this for us, has just put together a rack-level efficiency goal, again working with the community and with professors so we can measure it. As I told you, these AI compute clusters now scale at the rack level. We've committed to a 20X improvement at the rack level by 2030. That's the next energy efficiency milestone we're focused on.
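The arithmetic behind those milestones is worth making explicit: an N-times gain over five years corresponds to an annualized rate of N^(1/5). Only the 30X, 38X, and 20X figures come from the interview; the per-year rates below are derived from them:

```python
def annualized(total_gain: float, years: int) -> float:
    """Per-year improvement rate r such that r ** years == total_gain."""
    return total_gain ** (1 / years)

print(f"30X goal, 2020-2025:      {annualized(30, 5):.2f}x per year")  # ~1.97x
print(f"38X achieved with MI350:  {annualized(38, 5):.2f}x per year")  # ~2.07x
print(f"20X rack goal, by 2030:   {annualized(20, 5):.2f}x per year")  # ~1.82x
```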

Michael Krigsman: You've spoken about the innovation necessary to manage potentially significant changes in algorithms, and about the need for sustainability. Are there other big issues on your radar right now that you're focused on?

Holistic Design and Competition in AI Technology

Mark Papermaster: We don't have a choice, Michael. We have to look at the entire landscape. We worry every day, first, about disruptive technologies that are coming. We have a great research team. They're constantly looking ahead at what's coming in our computation and how we can interconnect that computation more efficiently. On networking, we've got very innovative technologies through our acquisitions of Xilinx and Pensando, super technology for networking these AI computations. Now, with our acquisition of ZT Systems, the landscape includes the rack design, and we're driving that innovation.

We call it holistic design. It has to be cross-disciplinary across all of these groups, hardware and software working together, to not only design the next generation but look one and two generations beyond that. What's disruptive? What's coming? We're partnering with others on quantum computing. I won't get into the debate as to when quantum goes mainstream. Quantum will start off as an accelerator; it'll be bespoke applications that can really benefit from quantum. But we're going to be there. Already, the Xilinx-based FPGAs in our portfolio are used as controllers in almost every quantum computer out there today. We also position the rest of our portfolio to be the CPU and GPU complex that works with those quantum accelerators. That's just one example, but it's really the entire landscape, Michael, that we have to look across; again, not just today's products but the next generation and the generation beyond.

Michael Krigsman: Again from Chris Peterson, "AMD, NVIDIA, and others seem to be competing relatively head-to-head in terms of overall design for AI. What about plans to compete with niche players like Cerebras or others that have wildly different solutions?" I'm interested in both your competition with NVIDIA as well as with the other niche players that Chris Peterson mentions.

Mark Papermaster: We are a mass producer of computing technology, and it doesn't behoove us to be the first to market in a niche application. Cerebras, I give that team a lot of credit. Andrew Feldman and the team do a great job of wafer-scale integration. For certain tasks that fit that model and can leverage data-flow processing across a wafer, it's certainly going to have advantages. But only specific workloads can really benefit there.

They're doing well in that, and we'll watch it. If the workloads that fit that kind of application grow, and I could say the same about any other startup focused on a certain area, we watch it closely. If we hear from our customers that it's gaining traction, that more and more models could leverage it, we're going to bake those approaches into our plan. But we have to make sure we're listening as well, given the role AMD plays in the industry. We're really looking to make sure we're one and two steps ahead in mass-scale AI computation.

Leadership, Communication, and AI Integration

Michael Krigsman: Two of the core themes that you have mentioned are execution and, to simplify it, keeping your ear to the ground on what's happening in the industry and where things are going. Do I have that right?

Mark Papermaster: Absolutely. I wouldn't manage my day-to-day any differently than I do right now, and have the entire time I've been in the CTO role at AMD: prioritize having that ear to the ground. I prioritize customer visits, and I fly to customers or set up virtual meetings like the one you and I are having today. I very much make sure I'm in listen mode. I don't go in and just beat my chest and say, "You should be using AMD. Here's how we can give you better total cost of ownership. We can help save you money." I do mention that; I weave it in, for sure. But that's not the reason I'm making the call. I'm first understanding the challenge they're facing, and I want to make sure our portfolio can address it. Then I can educate them and show them where we can bring them an advantage.

Likewise, we follow the competition. I run a regular process within AMD, which is our competitive review. We look at any announcements that our competitors are making, because we don't ever want to be arrogant. We can never have an attitude at AMD that, "Oh, we're just better than everyone else. We have the right way. What other people are doing is wrong." It's the converse. We look, and we are constantly asking ourselves, do we have that best practice? If there's a better approach someone else had, let's top that yet again. Let's use that to spur our innovation on to make sure that we don't find ourselves in any way disadvantaged.

Michael Krigsman: On this topic, Christine Lofgren from LinkedIn asks about leadership practices that enable you to be responsive, to innovate, and do the things that you're describing.

Mark Papermaster: We often overlook how important fundamental communication skills are. You think that sounds trite; of course you have to communicate well. I will tell you, it's paramount. I have spent four decades trying to hone my communication skills. Communication is two-way. You think you're listening, but are you really listening? Did you play it back to the person who was talking to you, to make sure you got it right?

That applies externally and internally. It's also how you communicate your north star to the team. Is the whole engineering team aligned on the priorities we have and how we achieve those goals? That takes investment and excellent communication. With our customers, are we really articulating our value proposition, what we're about at AMD, and how we can deliver them value? Across the whole spectrum, internal and external, communication is vitally important.

It has to be married with a sound strategy to win. If you take the time to really develop a strategy that makes clear, for all the investments you make, how you're differentiated, how you bring value, and how you're going to win, and you marry that with excellent communication, that's how you win.

Michael Krigsman: Do you have any quick advice on communicating well for business executives? It seems like an obvious topic, but you've just emphasized the deep importance of it.

Mark Papermaster: You have to put the time into it. It comes easy to me now because I've been doing this for decades, but it didn't always come easy. I invested the time to hone my skills, which means you have to put yourself in uncomfortable situations. Force yourself to be where you are not comfortable. That can be internal communications or external communications. Get out of that zone that you've been living in, stretch yourself, and get feedback.

Lisa Su always says, "Feedback's a gift." It really is. Put your thick skin on. Get feedback from those that you know will tell you the truth, not butter you up and tell you, "Oh, yeah, that was great." Get unfiltered feedback.

Lastly, in this age of AI, use AI. I use AI every single day to give me information so I can make better decisions, but also in communication. It proofreads what I wrote, tells me what wasn't clear, and tells me the tone of the communication I just drafted. It's that right-hand person there to help you, and, by the way, it's getting better and better every generation.

Michael Krigsman: Isn't that amazing that you, given your role as CTO of AMD, just referred to it as a "they," essentially?

Mark Papermaster: Why? Because you use it that way. When you're deploying AI, you're thinking, "I need help. Do I get this person to help me, that person to help me? Or do I want the AI to help me?" That is how I think about it.

Reasoning Models and Computational Challenges

Mark Papermaster: I think that's how most people are starting to think about it.

Michael Krigsman: I use numerous models every single day. Absolutely, that's certainly how I think about it.

Mark Papermaster: Michael, have you used some of the new reasoning approaches that are out there and the research inferencing capabilities that are out there? Have you tried any of those capabilities? You do?

Michael Krigsman: Oh, yes, all the time. Now one of the hardest challenges I face is figuring out what's the right model to use for the right task. If you use the wrong reasoning model, you'll end up sitting there for 20 minutes while it gives you back the wrong answer.

Mark Papermaster: Absolutely. How you deploy, how you prompt, and how you iterate is so important. You asked me a question earlier about how algorithmic change affects AMD. Now that we're hitting reasoning, as well as agentic processes that spawn off many, many tasks, I can tell you it has driven a huge change in how our computation engines are deployed.

When you go through reasoning, it means you're going through multiple loops of inference. You're doing the forward chaining of your machine learning processing again and again to get more accuracy, and that primarily also means bigger context windows. You're building on the fact that you ran an inference loop and got certain answers. You want to remember that, build on it, and maybe bring in a different expert or a different kind of model. It turns out that's like a database of context that you have to manage. That needs great CPUs, which we have at AMD. What we're seeing more and more is that a strong CPU paired with the GPU is what makes these new modes of inferencing, with reasoning and research, effective.

Likewise, with agentic AI, you can code it up to handle a number of tasks. It can go out to APIs, application programming interfaces, and spawn other tasks. What we're finding is that those often run on CPUs. It's really driving a different mix in the computation for AI.
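A toy sketch of the reasoning pattern described here: each inference pass appends to a growing context store that must be managed between model calls, which is the CPU-side bookkeeping Papermaster mentions. The run_inference function is a hypothetical stand-in for a real GPU-side model call:

```python
def run_inference(question: str, context: list[str]) -> str:
    """Hypothetical stand-in for a GPU-side model call."""
    return f"refined answer built on {len(context)} earlier steps"

def reason(question: str, max_loops: int = 4) -> str:
    # The context store grows every loop -- the "database" of accumulated
    # state that is typically managed on the CPU between GPU inference calls.
    context: list[str] = []
    answer = ""
    for step in range(max_loops):
        answer = run_inference(question, context)
        context.append(f"step {step}: {answer}")
    return answer

print(reason("What is driving AI compute demand?"))
```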

Michael Krigsman: Are you helping guide your software technology partners, the model makers, in terms of best practices for offloading certain kinds of tasks to the CPU versus the GPU and when to go back and forth and so forth?

Mark Papermaster: When you think about that interaction that we have with model developers, they're the deep experts of the model itself. We're the deep experts of that communication tuning across the different compute devices and how to mode match that with the algorithms. It's very much a give-and-take, a brainstorm that is pretty incredible to see.

Michael Krigsman: We have a question from LinkedIn, changing gears entirely here, from Dr. Ankur Upadhyay.

AMD's Global Expansion and AI Strategy

Michael Krigsman: He says, "What are AMD's plans for future expansion? Any plans with Southeast Asia?"

Mark Papermaster: We are constantly expanding. We have grown across Asia. We've been in Malaysia and Singapore for many, many decades; I think we've been in Malaysia for 50 years. We continue to expand worldwide, including in Southeast Asia. In India, we're at 8,000 employees today, 8,000 engineers, and we're on pace to be at 10,000 by 2028. Across Taiwan and China, we look at other geographies there as well. And it's not just Asia; we're equally expanding in Europe. We are an example of a global, multinational company. We serve a global market, and our engineering force and our sales force are absolutely global.

Michael Krigsman: This is from Vishal Bhargava on LinkedIn, and he says, "Which research and inferencing products are we talking about? Can you please share specific names?"

Mark Papermaster: When I'm talking about research products, I'm talking about, literally, OpenAI's research capability. You can get a research subscription with OpenAI. It costs more, but you truly have that research capability I described. You can get similar functions with Anthropic and other models that are out there.

Michael Krigsman: Ricardo de Anda on Twitter/X comes back again, and he says, "What are the biggest challenges AMD faces in managing machine identities across hybrid and multi-cloud environments, especially with the rise of AI-driven automation?"

Mark Papermaster: Identity is a huge focus across computing. We do an authentication every time we talk to any other computer to make sure that we know it's a valid, entrusted compute device. I would probably need a little bit more context to answer your question the best.

Michael Krigsman: How do geopolitical considerations and the drive for supply chain resilience directly influence AMD's AI product strategy, R&D investments, and global manufacturing partnerships? I'm not trying to make this a political conversation.

Mark Papermaster: Our supply chain, like everyone in the industry, has had to become very agile. As there are tariffs, which apply to certain products and certain sourcing locations, we need to have the agility to maneuver as best we can to mitigate those impacts. Otherwise, the cost of our products would go up. We do that. We have a fantastic supply chain team. Keyvan and his group are very flexible. Again, we're a global company, and we have global manufacturing. We've been agile. Just like I said, we have to be agile to adjust our products, listen to our customers, and optimize them for computation. It turns out, in the current geopolitical environment, our supply chain has to be equally agile.

Michael Krigsman: On LinkedIn, Christine Lofgren comes back and says, "What key trends or developments both within AMD and the wider tech industry do you expect to shape AI's trajectory over the next 12 months?"

Future Trends in AI and Agentic AI Applications

Mark Papermaster: One is that there's more and more inferencing. You're just seeing such innovation in models. Already, I'm so impressed with some startups and businesses that have been able to create small language models really pointed at specific tasks. I think we're just at the absolute beginning, the tip of the iceberg, of tailored models that bring innovation to unique tasks and can be incredibly efficient. I think model development will span everything from tailored and small language models to scaling the accuracy of the large language models as hyperscalers drive toward artificial general intelligence. We're going to see continued innovation across that spectrum.

Likewise, in the base technology itself and how we put it together, you're just going to see tremendous innovation to make it more power-efficient, across everything from supercomputing to the smallest devices. There is tremendous innovation going on in this area, everything from materials, to new types of transistors, to how memory is connected more closely and efficiently to the compute devices, to networking, and on and on.

Michael Krigsman: Mark, you mentioned artificial general intelligence, AGI. You're in a unique position, working with these model developers. Do you have thoughts on the trajectory of the models going forward, say over the next year? Where do you think the world will be nine months or a year from now?

Mark Papermaster: We can see nine months. I don't know that we can go much beyond that, nine months to a year. But when you look at that timeframe, you're seeing a continuation of more and more accuracy coming out of these models, larger context being able to be handled, and much more thoughtful answers, particularly like I talked about earlier with these reasoning techniques that are being applied. That trend will absolutely continue over the next year, as well as more and more compute capability needed to support that.

The tough question is five years from now. Where are we five years from now? What actually is the definition of AGI? Do we hit it five years from now? There's lots of debate out there, and I'll leave it to others to prognosticate on the definition of AGI and the precise date on which we hit it.

Michael Krigsman: What about Agentic AI that you've mentioned several times? It seems like that is such an important architecture, if we can call it that, and it's not going away anytime soon. Do you have thoughts on that?

Mark Papermaster: I talked to a CIO of a major bank, and she was describing to me the progress they've made in agentic AI. They got almost the entire company enrolled in deploying AI. Every week they're finding point applications that can just be sped up by creating an agentic AI. Think about that.

The Role of Agentic AI in Enhancing Efficiency

Mark Papermaster: That's a ripe area. Banks handle transaction processing. So much of their time is spent on well-defined tasks and well-defined data. If you can create an agentic AI that has clear goals, clear sets of data, and clearly defined APIs to connect different tasks together, it's going to drive a very quick efficiency win. I do think we're going to see that more and more. We're certainly doing it internally at AMD, looking for processes where agentic AI can speed up our activity.

It leads, Michael, to your question. I'll keep it very short. It's agentic AI that I would urge people to think about. Where can you make people really more productive? Do people want to be spending their time where it is a process that agentic AI could do for them? Lean in. Get that agentic AI to complete those tasks to free up the time for the innovation, the creativity, the really human-driven opportunities that we have in front of us as a society.
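The recipe described above (clear goals, clear data, clearly defined APIs) is easy to sketch. Everything below is a hypothetical illustration, not a specific framework or any bank's actual system:

```python
from typing import Callable

# Hypothetical, clearly defined APIs the agent is allowed to call.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_account":   lambda arg: f"account record for {arg}",
    "check_compliance": lambda arg: f"compliance status for {arg}: OK",
}

def agent(goal: str, plan: list[tuple[str, str]]) -> list[str]:
    """Pursue a clearly stated goal by executing (tool, argument) steps.

    Restricting the agent to a registry of well-defined APIs is what keeps
    the task bounded enough for a quick efficiency win.
    """
    results = [f"goal: {goal}"]
    for tool_name, arg in plan:
        results.append(TOOLS[tool_name](arg))   # only registered tools may run
    return results

# Toy usage: a transaction review decomposed into two API calls.
for line in agent("review transaction 42",
                  [("lookup_account", "42"), ("check_compliance", "42")]):
    print(line)
```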

Michael Krigsman: That kind of knowledge, and the ability to break down processes, is going to take some time to diffuse. It's one thing for folks like yourself, really advanced technologists who understand the link between the technology and the business process and can decompose it. But there's a maturing process across industry that will need to take place.

Mark Papermaster: Absolutely. It's cultural as well as technical. Therefore, it does take some time. People have to get their heads around it. They have to build the technical acumen to be able to deploy. But I think it's going to happen much more quickly than we ever anticipated.

Advice for CIOs and CTOs in the AI Era

Michael Krigsman: Mark, as we finish up, what is your advice for CIOs and CTOs inside large organizations in the enterprise at this point in time today? Based on your talks with lots of customers, what should they be really focused on?

Mark Papermaster: One, I'd say if you're not moving more quickly than you ever have before, something's wrong. The pace of change has really inflected over the last several years and will continue for as far out as any of us can project. So one is, if you're not comfortable embracing change, you have to adapt. I don't mean change without managed risk. One of the things we drove in our AMD culture is to take risks but manage them. Make sure you have checks and balances. Don't walk off the edge of a cliff without a rope to keep you from falling into the chasm.

I think that's one: be receptive to change, but manage it to ensure you're controlling your risks. The second, which comes with that, is stepping back and examining what your core value proposition is. What are the key things you're doing? In an AI era, can you really re-architect how you do things? Are there fundamental processes by which you run your business that can be re-architected in the AI era?

Those are the two biggest pieces of advice I would share with CIOs and heads of infrastructure out there. I would also say, help AMD in our quest to create a competitive environment, an open ecosystem with real competition rather than one dominant supplier. That isn't good for anyone.

Closing Remarks and Future Outlook

Michael Krigsman: Personally, I want faster chips so I can render video and do all the things I do faster and faster and faster, and with not quite so much heat.

Mark Papermaster: That is an unending quest. That's what technologists like myself have to do. We have no choice. We have to stay focused there every single day.

Michael Krigsman: Mark Papermaster, Chief Technology Officer and Executive Vice President at AMD. Thank you so much for taking time to be with us. I'm very grateful to you, Mark.

Mark Papermaster: My pleasure, Michael. Thank you.

Michael Krigsman: Thank you to everybody who watched and for your great questions. Before you go, go to cxotalk.com. Subscribe to our newsletter. We have incredible shows coming up, so check them out. We'll see you again next time. Thanks so much.

Published Date: Jun 20, 2025

Author: Michael Krigsman

Episode ID: 884