How AI-Driven Disinformation Works (and How to Stop It)

AI is fueling a new wave of disinformation. Learn how fabricated news sites and AI-generated content are being used to spread misinformation at scale.


Jun 28, 2024

In Episode 844 of CXOTalk, we explore the growing landscape of disinformation and its impact on businesses and society. Our guests are Patrick Warren and Darren Linvill, co-directors of the Media Forensics Hub at Clemson University and leading researchers on how disinformation spreads, especially on social media. They discuss the latest tactics that bad actors use, from state-sponsored campaigns to individual influencers, and how artificial intelligence is changing the game.

Warren and Linvill provide insights into the structure of disinformation campaigns, the challenges of detection, and potential technological solutions on the horizon. They also consider the business implications, as companies can become targets or suffer collateral damage.

By better detecting and understanding the origins, business models, and structure of these threats, technology leaders can prepare and protect their organizations from AI-driven disinformation, while using AI's benefits to keep their organizations safer.

Episode Highlights

Increase defenses against narrative laundering

  • Understand how disinformation campaigns layer narratives through multiple sources to increase the credibility of falsehoods
  • Develop systems to track and connect disparate content that may be part of coordinated campaigns

Leverage AI for both offense and defense

  • Recognize that AI can be used to create and spread disinformation at a large scale rapidly
  • Invest in AI-powered detection and mitigation tools to be able to keep up with evolving threats

Rethink digital media literacy education

  • Help business users learn how to evaluate information sources
  • Develop more nuanced approaches that do not inadvertently fuel conspiracy thinking

Prepare for disinformation threats to businesses

  • Recognize that companies can be targets or collateral damage in disinformation campaigns
  • Develop strategies to monitor and promptly address reputational threats arising from coordinated attacks

Balance authentication and privacy in online spaces

  • Explore technology solutions like blockchain to enable partial verification without compromising user privacy
  • Consider tiered authentication options that allow users to prove specific attributes selectively

Key Takeaways

1. AI Accelerates Disinformation Creation and Spread

AI tools have significantly increased the speed and scale at which disinformation can be created and spread. Malicious individuals can quickly generate large volumes of false content, translate it into multiple languages, and distribute it across numerous fake websites and social media accounts. This poses a challenge for human moderators and fact-checkers to effectively handle the influx of AI-generated misinformation.

2. Narrative Laundering Obscures Disinformation Origins

Disinformation campaigns often use "narrative laundering" techniques to obscure the true source of false information. This involves planting a story in obscure outlets and then amplifying it through diverse sources that are increasingly mainstream. When a narrative reaches major platforms, its dubious origins are hidden, increasing its perceived credibility and impact.

3. Businesses Face Growing Disinformation Threats

Companies are increasingly becoming targets or collateral damage in disinformation campaigns. Competitors or motivated actors may spread false narratives to damage a brand's reputation or manipulate market perceptions. Even unrelated political disinformation can affect businesses, as seen when NBA-China relations were strained by social media attacks following an executive's comments on Hong Kong.

Episode Participants

Patrick Warren is an associate professor of economics who has been at Clemson since 2008. Before coming to Clemson, he studied at MIT, earning a Ph.D. in economics (2008), and an undergraduate degree from the University of South Carolina Honors College (BArSc, 2001). His research investigates the operation of organizations in the economy — for-profit and non-profit firms, bureaucracies, political parties and even armies. He has written numerous peer-reviewed articles in top economics and law journals and currently serves as an associate editor of the Public Finance Review. He has served on the board of the Society for Institutional and Organizational Economics, been a visiting associate professor at Northwestern University and a visiting scholar at the RAND Corporation.

Darren Linvill is an associate professor of communication whose research explores social media disinformation and its influence on civil discourse (in and out of the classroom). He became a faculty member at Clemson after earning degrees from Wake Forest and Clemson and started studying social media in 2010. After becoming an associate professor in 2017, he delved deeper into the truth or falsity of online messaging and its effects. As a sought-after media expert, he’s contributed to many articles and broadcasts by outlets such as the New York Times, the Wall Street Journal, the Washington Post, Bloomberg, Rolling Stone, Inside Higher Ed, The State, CNN, NPR, ABC, NBC, WFAE and others.

Michael Krigsman is a globally recognized analyst, strategic advisor, and industry commentator, known for his deep expertise in the fields of digital transformation, innovation, and leadership. He has presented at industry events around the world and written extensively on the reasons for IT failures. His work has been referenced in the media over 1,000 times and in more than 50 books and journal articles; his commentary on technology trends and business strategy reaches a global audience.

Transcript

Darren Linvill: In its simplest form, creating fake troll accounts to take on the persona of real Americans, insert those accounts into particular communities, and then try to promote particular attitudes, beliefs, and worldviews within those communities.

Patrick Warren: I'll say there was some historical precedent for this sort of narrative laundering way back in the Cold War. I mean, the story about AIDS being invented in a US lab was a story that was spread in nearly this exact same way.

Michael Krigsman: Welcome to CXOTalk, Episode 844, where we discuss leadership, AI, and the digital economy. I'm Michael Krigsman, and today we're exploring how AI enables disinformation at scale.

Our guests are two of the world's top researchers on disinformation and how it spreads, especially on social media. Patrick Warren and Darren Linvill are co-directors of the Media Forensics Hub at Clemson University. Their work has been featured in many major outlets, including the Rachel Maddow Show, where I first learned about them back in 2020. Their most recent research was just published in The New York Times.

Patrick Warren: I had kind of a major pivot in my research career when I, with Darren, together, opened the Media Forensics Hub, which basically studies lying on the Internet in all its forms. We've been interdisciplinary from the start because we feel like this problem is an interdisciplinary problem that just sort of one viewpoint on its own misses half the picture every time.

Michael Krigsman: Darren, when we talk about disinformation, what is it? What do we mean by that?

Darren Linvill: A simple definition is that it is deception purposefully spread for a specific end. As Patrick said, we narrow that down to lying on the Internet. In our work, if someone is being deceptive, we want to understand the methods that they're using, the tactics, the techniques, the procedures. We, especially, Patrick and I specifically, have spent a lot of time looking at the work of state actors, folks like the Chinese, the Russians, the Venezuelans, and their work to inauthentically influence conversations in the West particularly.

Michael Krigsman: What does that actually mean, inauthentically influence conversations?

Darren Linvill: It can take a lot of different forms, that inauthenticity and the particular forms of manipulation that take place, and those forms depend on the actor and the goals. So I think that most Americans, at least when they think of disinformation, they think of what the Russians have done in the past, what they did in 2016 and again in 2020, which was, in its simplest form, creating fake troll accounts to take on the persona of real Americans, insert those accounts into particular communities, and then try to promote particular attitudes, beliefs, and worldviews within those communities. In particular, they try to make communities on both the left and right more extreme than they would have been otherwise.

And that, I think, is Russian disinformation at its core, historically: promoting particular ideas, sowing falsehoods, and influencing people in the traditional sense of what we think of when we say persuasion. But that's not all disinformation. It's not necessarily, I don't even know if it's the majority of disinformation anymore, especially if you look at what the Chinese are doing more recently.

The Chinese, as opposed to promoting ideas, they try to—now, I'm speaking in generalities here. There's lots of exceptions to this—the PRC does a lot of different things, but what they seem to be best at is demoting ideas, trying to make sure that there are particular things that we aren't talking about, and that takes on forms that look and use tactics and procedures that are entirely different than what Russia does.

So instead of creating artisanally made accounts that engage in particular communities, they will just, you know, send 1000 bot accounts at a conversation. If there's a hashtag that China doesn't like, they'll flood that hashtag out. If there's a particular individual whose voice they want to silence, they'll harass that person with fake accounts.

This is the research that we were part of just yesterday in The New York Times. There was a member of the Chinese diaspora, a dissident, living in the Northeast, and they sent thousands of accounts to attack this individual and, frighteningly, his teenage daughter, with horrible threats of violence and sexual harassment.

And this really affected me. Rarely have I been more angry than I was when we first identified this case. I have teenage daughters. Patrick has teenage daughters. So we took this personally, but that's the kind of thing that China does. And that doesn't take a well-crafted message. It doesn't take artisanally created accounts. It just takes thousands of bots and a little bit of time and a whole lot of distaste.

Michael Krigsman: So is the issue here primarily one of intent, and then the capability to scale and then distribute the message?

Darren Linvill: Scaling is pretty easy in this day and age. I know we're going to talk a lot about AI today, and that's, frankly, one of the things that AI is best at is scaling messaging. But even well before AI, it's always been easy to create new social media accounts.

You know, both China, Russia, Venezuela, they'll create thousands of accounts. And if a large percentage of these accounts are suspended by a platform, it's virtually zero cost because it's so easy and inexpensive to create social media accounts. I mean, that's fundamental to the platforms. They want it to be easy to create accounts because they want new users all the time. And that fact is used by bad actors to scale these operations quickly and easily.

Patrick Warren: I think there's a lot. I think there's too much attention on accounts in general. Darren and I have really changed the way we think about disinformation quite a bit over the past several years to really think more about what we call objects of influence. There's some real account or website or hashtag or narrative that these bad actors are trying to promote or demote.

And exactly how they go about it is a tactical question. Sometimes it's a lot of accounts; sometimes it's just a handful of accounts that are deeply invested in. The exact approach to affect these objects of influence is going to differ by the goal and the capabilities of the actor and the technology.

But the final impact, I think, is best thought of as promoting or demoting elements of the conversation in ways that affect what we're talking about and how we're talking about it in ways that are, you know, impactful.

Darren Linvill: I would add that it's not always about accounts, because often this is done off of social media entirely. One campaign we've identified recently, one that uses a lot of AI to underpin it, involves Russians using fake websites, fake news pages they've created to look like real Western news pages, with names like DC Weekly or the Austin Crier.

And they'll populate these news pages with content stolen from other genuine websites and rewritten using AI so that they look authentic. And then they'll insert their narratives into these fake news pages and use that to layer the message into social media through real users oftentimes, and distribute the message in that way.

You know, a lot of these techniques, you know, they utilize social media to distribute the message, but they're not necessarily reliant on any fake accounts at all but fake pages.

Michael Krigsman: Do you want to take us through the anatomy of one of these kinds of disinformation campaigns? And let me just also remind everybody that you can ask your questions of our guests right now. There's a tweet chat taking place, of course, on Twitter using the hashtag #cxotalk. Just pop your question into the chat. If you're watching on LinkedIn, just pop your question into the LinkedIn chat and we will answer.

So, take us through the anatomy of any one of these kinds of campaigns to make it really concrete how this works.

Darren Linvill: This campaign that we've identified that is coming from Russia, from what seems to be former elements of Evgeny Prigozhin's Internet Research Agency, which has now, you know, changed its name and morphed into other things. But what they do is, and we've identified 36 different narratives that they've distributed in basically the same way that they're...

Patrick Warren: Trying to promote, to go back to that promotion idea, that there's some narratives they're trying to make more prominent in the conversation than they would organically be.

Darren Linvill: Yeah. And most of these narratives are about, you won't be surprised to hear, Ukraine. They're trying to make us less trustful of Ukraine and less supportive of the war in Ukraine. Obviously, that would be to Russia's benefit.

So what they'll do is they'll take a story, and they'll place that story first on YouTube, and it'll be an interview, perhaps, with an individual with a story to tell, some insider perspective.

The first narrative we identified was an individual who was an intern at Cartier, or so she claimed. And she told a story about Olena Zelenska, President Zelensky's wife, coming to New York and purchasing $1.1 million in Cartier jewelry. Zelenska then supposedly got angry at this intern and insisted that she be fired. And on the way out the door, the intern took the receipt for that $1.1 million and made sure that you saw it on the screen. This is the story that this intern told in a YouTube video.

This video appears on an account with no followers, brand new account, never been seen before. But that story then is told in narrative form in an article that appears in non-western media. In this case, it appeared in media in West Africa on pages that were overtly paid promotion pages.

So someone, likely in Russia, paid this West African outlet money, and they put this article onto their website, and it appeared legitimate, especially if you came to it via social media, because you had no way of knowing unless you came to it through the website that it was a promoted content page.

They then take that West African story and spin it into one of their fake websites that are intended for a western audience. Specifically, in this case, they told the story on DC Weekly, which is obviously meant for an American audience.

And then finally, after layering it, taking it farther and farther away from its original Russian origins, they disseminated it online through a few fake accounts but mostly genuine influencers, some of whom may have been paid to post the content, and some of whom had connections to Russia in one way or another. It then spread to tens of thousands of users, and similar narratives spread by this campaign actually made it into the mouths of senators discussing support, or lack of support, for Ukraine.

And the most interesting final touch to the story is journalists in Italy were actually able to identify the likely actor in the video who was pretending to be the intern. And she was in Russia. She was actually a beautician in St. Petersburg, the former home of the Internet Research Agency.

Michael Krigsman: What enables this to take place, to the point where US senators are now quoting this false news, literally false news narrative? Is it our gullibility? Is it that, in some situations, it's convenient for us to align with this because it suits our ends? Is it the great technology and great skill that they have? What's actually going on?

Darren Linvill: Oftentimes, when you spin a story that somebody wants to believe, they're going to believe it, and that's what good disinformation actors know. Fundamentally, they're not necessarily trying to persuade new audiences so much as entrench existing audiences and make those audiences hold more extreme opinions than they would have otherwise.

Patrick Warren: I'll say there was some historical precedent for this sort of narrative laundering way back in the Cold War. I mean, this story about AIDS being invented in a US lab was a story that was spread in nearly this exact same way, starting with a story placed in, I think it was an Indian newspaper, and then laundered through some questionable Eastern European newspapers before appearing in the Western press; I forget exactly where it got covered.

But this story, I mean, this pattern of narrative laundering is an old one, but I do think that technology has enabled it. Various technologies, I'm happy to talk through what those technologies are, have enabled this process to be sped up, to be broadened. So instead of one or two narratives, we can do 38 in six months. We can speed up the rate at which it moves from the initial placement to the popular discussion from years to weeks.

And I think that there are various technologies, some of which are AI technologies, that enable that.

Darren Linvill: So this idea that Patrick's pointing out about Russia's history in what are called active measures, promoting ideas, spreading what's called the fire hose of falsehoods, so that people don't know what to believe, there's a long history of Russia doing this. And I was talking about the distinctions between Russian and Chinese disinformation earlier.

The same is true with China's approach. There's a long history of China, especially with their own people demoting ideas, making sure that people aren't talking about a particular thing. I mean, go to China and try to have a conversation about Tiananmen Square. It's not going to happen.

So these different approaches to disinformation are very much rooted in the history of these countries and how they engage in communication and the media ecosystem.

Michael Krigsman: Please subscribe to our YouTube channel. Subscribe to our newsletter.

We have some questions coming in, so why don't we turn to some of these excellent questions from the audience?

And the first one is from Anthony Scriffignano. He's a prominent data scientist. He's been a guest on CXOTalk a number of times. And actually, he and I have collaborated on some different projects together. And Anthony Scriffignano says this: “You talk about disinformation in the context of lying on the Internet, as well as TTP, that influence context or opinion. Can you elaborate on novel forms of disinformation, such as Gen AI being used to create narratives that seem to be written by a target but isn't?”

Patrick Warren: One of the most overwhelming cases of impersonation that we've seen in our work has actually been one of these flooding campaigns. It involved the creation of hundreds and hundreds of fake accounts whose purpose was not to send content at all, but rather just to make it difficult to find the real account of the person they were trying to flood out.

And so this involved AI only in a very narrow way, in that the content these accounts were releasing, in order for it not to be completely duplicative, was rewritten using what was probably ChatGPT, just to avoid detection, I think.

So rather than flooding this target with hundreds and hundreds of accounts that were all saying verbatim the same thing, they made it harder to detect the fact that the target was being flooded by using AI to rewrite those texts.

Now, they were not trying to imitate the voice of this targeted individual. Rather, this was basically chaff to avoid detection by the platform's detection algorithm, so it was essentially AI against AI. I can't cite a particular case of AI being used to duplicate the voice of a target.

There was the Joe Biden audio case during the primary in New Hampshire. I don't know. It's not a case that we've studied in detail, but that's one that comes to mind in the US context. There was also a case in Eastern Europe.

Darren Linvill: The details escape me, somewhere in the Balkans, but I remember that. I would say, though, to answer this question more generally, I am not yet as concerned about Gen AI creating fake versions of the president of one country or another as I am about cheap fakes. At this stage, AI is really good at creating lots of relatively low-quality content, but for the sort of high-quality fake people, it's not quite there yet.

Whereas a cheap fake is just a creatively taped-together video. Let's say you wanted to make either candidate right now look old and doddering. It's easy to snip together some tape where you see them wandering off but fail to show that they're about to shake somebody's hand or something.

And those are plenty viral and plenty influential without lots of technology underpinning them, without being wholly false. And those have lots of advantages, too, relative to a video that is created from whole cloth. They're cheap, first of all, easy to make, just a little creative editing.

But also, a lot of these videos aren't necessarily contrary to the platform's code of conduct. You can spread these without fear of them being deleted, or without fear of your account facing any consequences. What's a platform going to say if you show only half of a video? Oh, you have to finish that video or we're going to deplatform you? No, I think that's a difficult argument for the platforms to make.

But cheap fakes also fundamentally play on the same psychology that I was talking about before. They give people something that they want to believe. And even just a short, cheap fake is going to reinforce somebody's existing beliefs and entrench those beliefs in the same way that a deepfake would.

Michael Krigsman: We have a question from BigScootEnergy, and he says: “How does AI-driven misinformation impact businesses and corporations?”

Patrick Warren: We've been talking here mostly in the political domain, but there's nothing to say that an object of influence needs to be a political object. Companies can easily be targeted.

We've worked with several companies that believe that they have been targeted in this way, and some of them had in fact been targeted in this way where some competitor or some motivated actor wanted to promote or demote a narrative associated with that brand.

One of our new hires in the hub has a whole research agenda about social media firestorms and the way they can happen inauthentically. I think there are huge implications for this work in the business context, and I think big corporations have come to recognize that, but maybe not quite as much as they should.

Darren Linvill: It's very possible, and we've seen it happen many times, that corporations can be sort of collateral damage to conflict between states as well. In the same campaign that we were talking about from Russia with the fake websites and disseminating of particular narratives, we saw a pharmaceutical company that was targeted by a narrative suggesting that that pharmaceutical company was testing in Ukraine, and as a result, children were dying.

Well, the real target of that campaign was Ukraine, and they were using pharmaceuticals as a way to connect with a particular audience that may have had doubts about vaccines in particular. But that company was still potentially affected by that sort of attack.

And we've seen this sort of thing happen many times. We did some work a few years ago looking at how the NBA was affected to the tune of hundreds of millions of dollars by conflict with China when Daryl Morey, the former general manager of the Houston Rockets, said something positive about democracy in Hong Kong, and he was then attacked by thousands and thousands of inauthentic accounts across platforms.

And they did that to make sure that the NBA, or one reason they did it, was to make sure that the NBA understood the potential viewpoint of the Chinese state and the Chinese people, who, from the NBA's perspective, is a market.

Patrick Warren: This isn't just brands for consumers, either; think of potential employees. We were involved with a case involving, I think, someone trying to spread some false claims about our own university, maybe to affect recruiting or applications. This idea of using increasingly inexpensive technology to affect the prominence of ideas on social media has huge business implications.

Michael Krigsman: This is from Greg Walters, and he has a bunch of questions. So let's kind of group these together. And he says: “Isn't the best way to defeat propaganda, even at scale, to enhance people's critical thinking?” He goes on: “Aren't you demoting, instead of recognizing, individuals' critical thinking skills, and at what point does the message overwhelm the intelligence of the consumer? And on the other side of the issue, can it not be said that by recommending the restriction of misinformation, we are on the verge of censorship?”

So he's basically accusing you of not recognizing individuals' critical thinking skills and suggesting that, by recommending the restriction of misinformation, you're recommending censorship.

Darren Linvill: When it comes to critical thinking skills, this is historically the way that media literacy and digital media literacy have been taught for many, many years in the educational system. And to a degree, are these skills important? Of course they are, but I think they have side effects, because for the past generation of students we've basically been teaching them to doubt, to question everything that they see and read in the media.

And now, with my students, I know they've taken that to heart in ways that many older Americans might find concerning. They don't just think that they're being lied to; they assume that they're being lied to, which is sometimes true, but not always. And I think where my students today have trouble is understanding when and how to trust information.

And I think that's sort of the opposite of the issue we may have had historically. So when it comes to critical thinking, I have mixed feelings. It's something we need to teach; obviously, it's important in all kinds of contexts, but I think that it can also have consequences.

So take, for instance, the QAnon conspiracy. These were entire communities of people that gathered together, that did research together, that engaged in an exploration of ideas. And I'll tell you what they were doing. They were thinking critically, and in the process, they spread a lot of potentially dangerous ideas.

Patrick Warren: Even if we could find a perfect set of educational interventions that didn't just make people doubters but improved the quality of their thinking, and therefore the quality of the conclusions they draw, that still wouldn't solve many of the problems of disinformation we've already talked about, because several of the targets of disinformation are not your mind directly.

It's not about affecting what judgments you draw, but rather affecting what algorithms deliver to you, or whether you can find the needle of the real person's account in the haystack of fake accounts. There are many, many sorts of disinformation against which just being a more discerning reader offers no defense.

Let’s take Amazon. Let's take Amazon reviews. So Amazon spends a lot of money trying to improve the quality of the reviews on their websites. They try to make sure that people are not producing fake reviews because that makes the customer's experience bad. But no individual consumer has the information or the time, even if they had the information, to exercise the judgment very effectively, certainly not as effectively as Amazon may be able to, about whether one given review is real or not.

We need systematic approaches to solve systematic problems. One of those approaches is investment, for sure, investment in media literacy, but also investment in systematic ways of labeling or providing context to the messages that you see.

I'm actually, I'm about as close to a free speech extremist as there is. But that's different than providing no context to the information that you see. I think that platforms have a business case often to provide that context, and that we as a society have a policy case for helping them provide that context.

Michael Krigsman: I just want to remind everybody that this would be an excellent time to subscribe to the CXOTalk newsletter so we can keep you up to date with our shows. So subscribe to the CXOTalk newsletter on our site. Do that now. And be sure also to subscribe to our YouTube channel.

Alright, we have a question from Arsalan Khan. Arsalan is a regular listener. He asks great questions, and he says: “How can normal people who are not journalists or technologists know when information is disinformation? And how can you challenge disinformation from authoritative figures?”

Darren Linvill: The short answer is, I'm not sure you can, Arsalan, but what I think is important is how we engage in the digital world, how we engage with the potential of disinformation, knowing that it is a reality. I mean, because it's true that, you know, most things aren't disinformation.

Most accounts are who they purport to be. I can't tell you how many times I get messages from journalists or real people and they say, hey, is this account a Russian troll? And, you know, it's never a Russian troll. It's usually just a jerk. But I think the knowledge of disinformation and understanding it should still inform how we engage in the real world.

So when I'm out in the real world, I don't let a stranger on the sidewalk borrow my phone and have access to all my contacts just because I like their t-shirt. I don't let somebody into my home just because they knock on my door and say they like the same candidate that I like.

But in the digital world, we do that all the time. And I think that has the potential to open us up to all kinds of negative consequences, both disinformation and, perhaps more common and more important, fraud.

And so I think it just takes more education, more understanding, and a sort of change of mindset.

Michael Krigsman: This is again from Anthony Scriffignano, who says: “Regarding the point you made earlier on the lack of governance on creating new accounts and fake accounts on social media, can you touch on how we can use voluntary multifactor authentication and other techniques to do a better job of adjudicating source veracity, being sure that these accounts are real, or distinguishing between the real and the fake?”

Patrick Warren: There are important roles for AI, both in offensive and defensive capacities, in the battle around disinformation. One of the most important roles on the defensive side is its use by platforms, and really by other entities; it doesn't just have to be platforms.

Other people could also provide third-party, maybe not verification, but at least context for accounts. I don't think it necessarily needs to be tied to the platform, but there's a tension here, and I think technology may offer a slight release from that tension.

So for a long time, there's been this tension between privacy and authentication, where I may pretend to be a single mother from Baltimore, and that's a way that the Russians could insert themselves into certain groups and conversations. It's very difficult for me to verify whether that's true or not.

But on the other hand, the sort of proof that would be required to verify that could open lots of people up, say dissidents, we were just talking about dissidents, to threats from authoritarian regimes or fraudsters or all kinds of things. I do think there's a technological way to maybe cut that knot a little, where I may not verify exactly who I am, down to the level of revealing privacy-protected, individual elements about me, but I still reveal some class-specific characteristics about me.

So what do I mean by that? Let me make it very clear. So maybe I would like to contribute to a conversation, and I'm happy to reveal the fact that I have an email address at a major university in the United States. That's something that is in principle verifiable, without verifying who I am in any way.

These sorts of intermediary levels of verification, especially if they're done, say, through the blockchain, in ways that are not traceable by any individual back to me, might be able to break this tension between privacy and authentication.

There have been several attempts at doing this. I've not seen it adopted well in practice yet, but that doesn't mean it won't be; I hadn't seen large language models and people interacting with them adopted in practice three years ago, and now every student that walks in the door has a lot of experience with that.

So just because it hasn't happened yet doesn't mean it's not going to happen.
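To make this kind of class-level attestation concrete, here is a minimal sketch assuming a hypothetical issuer service; a symmetric HMAC stands in for the asymmetric signatures or zero-knowledge proofs a real deployment would need. The holder proves only that they control an email address at a US university, and the platform never sees the address or the identity behind it.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical issuer: it privately verifies the underlying claim (for example,
# by sending a confirmation link to the address), then signs only the
# class-level attribute, never the address or the holder's identity.
ISSUER_KEY = secrets.token_bytes(32)  # a real system would use an asymmetric keypair


def issue_attestation(email: str, attribute: str = "us-university-email") -> dict:
    if not email.lower().endswith(".edu"):  # toy check standing in for real verification
        raise ValueError("attribute not satisfied")
    claim = {"attribute": attribute, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}  # contains no email and no identity


def platform_accepts(attestation: dict) -> bool:
    # The platform (sharing the issuer's key in this toy setup) checks the tag;
    # it learns only "the holder controls a .edu address," nothing more.
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])


token = issue_attestation("someone@clemson.edu")
print(platform_accepts(token))  # True, and the platform never saw the address
```

In practice, the attestation would also need to be unlinkable across uses, so that neither the issuer nor the platform could tie posts back to a particular holder; that is the gap Patrick suggests blockchain-style infrastructure might help close.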

Michael Krigsman: And this is again from Big Scoot Energy. He says: “Once you've identified a piece of disinformation, how difficult is it to stop the spread of that disinformation? What is the process of removing a piece of AI-generated disinformation from circulation?”

Darren Linvill: The short answer is that it's never easy. It depends on the platform. So once upon a time, we had good relations with Twitter, and Twitter had a robust moderation staff with connections at organizations like our own and other organizations around the world, nonprofits, even state agencies.

And if there was something circulating on that platform, it could be moderated if it was spread in some verifiably inauthentic way and it was verifiably false information. Now, obviously, that's not the case. I don't have an email address for anyone at Twitter anymore. If anybody listening is willing to share a Twitter contact, I would love to have it.

But at most platforms, there are contacts that you can get. We've engaged with most platforms to varying degrees, sometimes through journalists, sometimes through our own connections, to try to spur the moderation process. We're very strict in what we go to them with in terms of our confidence level of this being some kind of inauthentic networked activity spreading false information.

But it's a tricky process, and especially in this political environment, an evolving one.

Patrick Warren: Morgan Wack is a research professor in our lab, and he's done some work looking at fact-checking. You're not quite asking about fact-checking, but this relates. What he finds is that, on average, the span between the moment when a false claim is made and when it is fact-checked is something like three or four times as many days as it takes for that false claim to reach, I think it was, 75% of its spread.

So speed is incredibly important, even in contexts where there's really no dispute at all about the veracity of a claim or the authenticity of an account. And again, I'm going to go back to this AI-for-good, AI-for-bad point: it is impossible for humans to act fast enough to remove even accounts that are verifiably fake, accounts where I could convince 99 out of 100 people that they are fake. The damage is often already done.

Michael Krigsman: Arsalan Khan comes back and he says: “Can't we just create something with AI and blockchain technologies to automatically challenge disinformation?” That's a really interesting question. Why don't we have the tools to simply say, hey, this is disinformation, and we're going to block it?

Darren Linvill: With a lot of false accounts, especially bots, it's variations on this process that platforms already use to mitigate the problem and moderate the accounts. I know from having worked with the platforms in the past, particularly on the work of the PRC, a campaign known as Spamouflage, that they'll create thousands of accounts a day, and many of these accounts may never even post because they'll be challenged very quickly.

But it goes back to this issue of scale. It doesn't matter if you've challenged and suspended half of the accounts because they're so cheap and easy to make. The other half, all they need to do is post a few times, and most of them are eventually challenged and suspended.

But once they've posted a few times, the damage is already done. They've already had their effect on the platform and on the algorithm and on their target individuals. And so moderation doesn't always solve the problem if it can't be just instantaneous.

Patrick Warren: Let me add something else, and that is that bad actors will, kind of like the AIDS virus, use the platform's defenses against it to some extent.

So I was talking about promoting and demoting narratives. Let's say that I had a plan: I really wanted to demote some narrative. There were some things that people were talking about that I didn't want people talking about at all. And let's imagine that there are other campaigns that want to promote narratives; they want to take things that no one's really talking about and get people to talk about them more. Now suppose the platform has created a system to detect promotion.

The platform is looking for inauthentic promotion of narratives because it doesn't want people to be able to bring their talking points to the top of the platform inauthentically, and it has set up a system to try to detect this and mitigate that impact, or even prevent that sort of false trending of, say, hashtags.

Well, then, if I'm an actor whose goal is not to promote but rather to demote, I can take advantage of that system. I will make a network of accounts that is actually very easy to notice, I will point it at the hashtag that I am trying to demote, and then I will turn the platform's system for detecting inauthentic promotion against it, basically triggering the response to get that hashtag shut down, or not allowed to trend, or what have you.

We've actually seen this in practice multiple times across multiple platforms, where targeted people or targeted narratives were explicitly labeled. We saw this on Pinterest once; the phrase was something like, "This topic violates community guidelines." They basically removed the topic entirely from Pinterest because it was the target of a campaign. This is exactly an example of using their defensive methodologies against them.

Darren Linvill: It may be possible that many platforms can't walk and chew gum at the same time.

Michael Krigsman: It's also pretty obvious that these platforms are doing an incredibly lousy job of it because the various campaigns you have described that were so successful, they're working, right? So what do we do to fight this?

Darren Linvill: That's becoming increasingly difficult given the range of tactics the bad actors are employing. This campaign from Rwanda that Patrick was mentioning earlier in our conversation was harnessing real people's accounts that they'd borrowed and using AI to add content to those real human accounts.

That's clearly a violation of the platform's guidelines, but very hard to detect because these accounts all look very different from one another. They're posting other content on the weekends, but just this disinformation periodically in a coordinated, networked way.

And that's just one of many examples. Venezuela has a totally different approach to harnessing real people: they go and sign up for an app and then post disinformation through that app, but again, using their real human accounts.

And we've seen this as well with Russia, not in the same networked way, but relying on real people to spread their disinformation, whether those real people are influencers that happen to work for RT or Sputnik, or influencers they simply pay; it's easy to send somebody a few dollars over the Internet now, and they may be quite willing to take it. It's far cheaper for Russia to engage in this sort of activity than to run a whole troll farm where they have to create their accounts from whole cloth.

So, you know, as frustrated as I am with the platforms sometimes, they do have their work cut out for them as things evolve very, very quickly.

Michael Krigsman: How do we combat this?

Darren Linvill: I think it takes a whole of society approach. I think that we can look to countries that have, by and large, been successful in combating disinformation, countries like Finland. Finland is literally on the doorstep of Russia. They've been dealing with these issues far longer than we have and have sort of existential reasons to take them more seriously than we do.

Is there disinformation in Finland? Without question. We've seen some of it, but they have taken a whole of society approach. Understanding of disinformation is integrated into their educational system from a very young age in an apolitical way. They work with the platforms, their politicians understand the problem and the threat that it poses, I think, in better ways than we do here in the states.

And like I said, I think it fundamentally takes a whole-of-society approach, rather than the piecemeal way that we're doing it.

Michael Krigsman: Are there technology solutions? Many of these campaigns are using AI, as you've been describing, to scale their work, to automatically create millions of accounts, to use ChatGPT to create lots of content. Are there technology things that we should be doing that we're not?

Patrick Warren: I think it's maybe useful to step through the ways that we think AI might be, or not just might be but demonstrably has been, making the jobs of disinformation purveyors easier. And then let's think carefully about how it might help in fighting those weak points.

So I think naturally, one of them is speed. So I mentioned how quickly things spread on social media. Well, if I can quickly create many things and sort of stay ahead of opportunities to detect me, that's a big problem. So I can do 38 fake narratives, and I can translate them quickly into English and spread them, you know, in many, many different fake websites all at the same time. That's a problem.

If this speed element of AI is a big force multiplier for the offense, it can also be a big force multiplier for the defense, right? It is true that I can use AI to rewrite the same text in lots of different ways and therefore hide the fact that it's all been produced by the same individual. But then I can also use AI to read all of those rewrites, distill them down to their core, and essentially reconnect them to each other.

In the past, if the IRA or China hired thousands of people to rewrite messages, giving them the same basic message and asking humans to rewrite it, I would have had a very difficult time seeing that. But now I can use an LLM to basically undo that rewriting and notice that these messages are essentially the same, even though the words, if you were to read them, are very different from one another. That's basically using the exact same technology on offense and defense. That's just an illustration.
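As a rough sketch of the defensive technique Patrick describes, reconnecting messages that an LLM has paraphrased so they no longer match verbatim, one common approach is to embed each text and cluster near-duplicates. The example below assumes off-the-shelf tooling (sentence-transformers and scikit-learn) and is not the Media Forensics Hub's actual pipeline.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

# A few posts that push the same claim in different words, plus one unrelated post.
posts = [
    "Ukraine's first lady reportedly spent $1.1 million at Cartier in New York.",
    "Olena Zelenska is said to have dropped over a million dollars on Cartier jewelry.",
    "Report: Zelensky's wife bought $1.1M of jewelry during a New York visit.",
    "The city council approved a new bike lane on Main Street yesterday.",
]

# Embed the posts so paraphrases land near each other in vector space.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(posts, normalize_embeddings=True)

# Group texts whose cosine distance falls below a tunable threshold.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.4, metric="cosine", linkage="average"
).fit_predict(embeddings)

for label, post in zip(labels, posts):
    print(label, post)  # the paraphrased variants should share a cluster label
```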

So one of the things is speed, one of the things is targeting. So another thing that AI lets you do is that I can send Darren a different message than I send to you in order to push the same narratives, but in ways that would be particularly attractive to Darren versus particularly attractive to you.

That sort of very narrow targeting is something that AI can help with, but again, it can help with defense, too, because I can recognize that these are all variants with very different targeting, and I can learn not only what message you're trying to push, but also something about how the tactics you're employing are driven by your model of how people respond to things.

So, let me give you an example of this. The DC Weekly campaign that we spoke about a couple of times made some mistakes in putting the campaign together. One of the mistakes, and this is maybe a drawback of scale, is that you have to proofread a lot. And if you don't proofread a lot, then every once in a while, say 1% of the time, or even half a percent of the time, you fail to proofread something you should, and the prompt leaks through.

This is something that AI is known to do every once in a while: reply back with your prompt in addition to the answer to your prompt. And so we were able to actually observe the sort of targeting they had in mind: rewrite this, but in such a way as to make pharmaceutical companies look bad, just as an example.

So, that told us not only the sort of stories they're trying to rewrite, but the sort of spin they're putting on them. Now, in this case, I could see the prompt, which made it easy, but AI could have solved that problem for me itself: read all these stories and tell me the spin that's coming into them. That is the kind of thing we can do at scale.

That would have taken hundreds of human hours in the old days. So it feels to me like for every advantage that AI offers the offensive player, there is a mirrored advantage that it offers the defensive player. Now, we need to invest. They have strong incentives, strong incentives to invest in this sort of offensive technology and in how to apply it in ways that are effective. We in Western society, we as a nation, need to make similar defensive investments.

There's been some fights about NSF funding for projects that study disinformation. I understand the origin of those fights, but let's not let partisan differences get in the way of a true fact, which is that we need to invest in defensive technology here. And it's clear that there are returns to that investment, but it needs to be made.
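As a toy illustration of catching the prompt leakage Patrick mentions, a scraper could scan article text for instruction-like fragments that a language model sometimes echoes back. The patterns below are invented examples, not the phrases the DC Weekly campaign actually leaked.

```python
import re

# Illustrative patterns only; the phrases a real campaign leaks will vary.
LEAK_PATTERNS = [
    r"\brewrite (?:this|the following) (?:article|story|text)\b",
    r"\bas an ai language model\b",
    r"\bin a way that (?:makes|portrays|casts)\b",
    r"\bhere is the rewritten (?:article|text)\b",
]


def find_prompt_leaks(article_text: str) -> list[str]:
    """Return instruction-like fragments found in an article body, if any."""
    hits = []
    for pattern in LEAK_PATTERNS:
        hits += re.findall(pattern, article_text, flags=re.IGNORECASE)
    return hits


sample = (
    "Here is the rewritten article in a way that makes the pharmaceutical "
    "company look bad: Officials confirmed the trial sites on Tuesday..."
)
print(find_prompt_leaks(sample))  # non-empty, so flag the page for human review
```

A heuristic like this only flags the sloppiest cases; as Patrick notes, an LLM could also be asked to summarize the spin across many articles at once.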

Michael Krigsman: And of course, some of the major cybersecurity vendors we work with at CXOTalk are heavily using AI to protect against cybersecurity attacks. But those are mostly technology threats.

We have a few more questions that have just come in from Twitter. I'll ask you to answer these very, very quickly because we're just about out of time. From Lisbeth Shaw: “What trends are you seeing with respect to AI-driven disinformation?”

Patrick Warren: One trend that I'll mention just quickly is its diffusion across actors. Early on, you would see it essentially only from state actors. But now, I think every disinformation actor I've bumped into, small-time fraudsters as well as state-backed actors, everybody is using AI. And specifically, everybody's using LLMs.

Darren Linvill: We're using it.

Patrick Warren: Everybody's using LLMs. I mean, everybody. Even students, yeah, my students. To me, that's the number one trend: it has become off the shelf for anyone.

Michael Krigsman: Again, from Arsalan Khan: “We all have our own information biases and filters. Who decides which information is information, disinformation, or misinformation? One man's information or one woman's information is another woman's disinformation.”

Darren Linvill: I think that's very true. I think we ultimately each decide. But that doesn't mean there aren't important reasons for understanding each of these things, for understanding how each of them spreads, and for understanding when it is appropriate to engage in various forms of moderation, whether that's labeling content or suspending accounts.

Because while there are distinctions between each of these things, I think that at the end of the day, authenticity matters. Giving people genuine perspectives and a genuine understanding of what real people in the world around them believe, rather than what fake people believe, matters, so that we're all working from the same playbook.

Patrick Warren: Can I add one thing? I know that we're supposed to be fast, but this is really important. I don't think anyone would ever criticize Amazon for its decision to remove product reviews that it believes are fake. That's a certain sort of decision about whether something is disinformation or not.

Amazon's decision to remove certain reviews it believes are fake is not a violation of anyone's civil rights. These platforms appropriately moderate the content on their platforms; it is in their business interest to do that. I think we need to recognize that.

Michael Krigsman: We have one final question from BigScootEnergy, and he's asking the question that I think is on everybody's mind. “There's a big event that's taking place later this year, in November,” and without mentioning any names, because if you do, Google's algorithm literally will suppress us; its ability to distinguish genuine information, such as we provide here on CXOTalk, from incendiary information is limited and very weak. So without mentioning any names, the big thing that's happening in November, what do you think is going to take place from a disinformation perspective?

Darren Linvill: I think I'm not going to get a lot of sleep, Michael. I think that's what's going to happen.

Michael Krigsman: So you anticipate a lot of activity?

Darren Linvill: Yeah, because I think there are a lot of bad actors with vested interests in the outcomes, and that often attracts disinformation and deception. Honestly, I think we're going to see surprises.

I've already seen so many new techniques and new methods employed just in the past year that have surprised me with each new event. And I think we're going to see things that I'd never thought of, and it's going to surprise me. At least my job will be interesting.

Patrick Warren: I want to reiterate this barrier-to-entry point. I think one reason why I'm particularly nervous about the scale of disinformation in the fall is that it is basically turnkey for any motivated actor, even one with very little technical capability. I think we're going to see a lot of non-state actors engaged here in ways that we didn't see even in 2022.

Darren Linvill: Yeah, I think that's true.

Michael Krigsman: The technology in general has become so easy, and LLMs are so widely available and so powerful, that the migration of script kiddies from cybersecurity break-ins to disinformation seems like a very reasonable likelihood, as you were describing.

And with that, I want to say a huge thank you to Patrick Warren and Darren Linvill, who are the co-directors of the Media Forensics Hub at Clemson University. Gentlemen, thank you so much for sharing your expertise with us on this extremely important topic today.

Patrick Warren: Thank you for the invitation.

Darren Linvill: Thank you, Michael. It's great to be here.

Michael Krigsman: And everybody, thank you for watching, especially you folks who asked such excellent questions.

Before you go, please subscribe to our YouTube channel, subscribe to our newsletter. Check out CXOTalk.com because we have really great, great shows coming up.

Thanks so much, everybody. I hope you have a great day.

Published Date: Jun 28, 2024

Author: Michael Krigsman

Episode ID: 844