Using Data Science to Detect Disinformation

Disinformation presents numerous challenges to business and society. We explore strategies for using data science to uncover patterns and reveal disinformation campaigns.


We hear so much about disinformation, but do we even know what disinformation is? What is the connection between disinformation and data? Disinformation, along with the related notion of fake news, has a profound impact on our society and our elections.

Brett Horvath serves as counselor to elected officials, policymakers, Fortune 100 CEOs, military strategists, and leading scientists as they confront the rapidly evolving landscape of global information warfare. He has worked at the intersection of technology, politics, and risk for over 10 years, including managing come-from-behind local elections, creating new organizing tech for national campaigns, designing the first comprehensive search tool to mine Twitter's entire database in real time, mapping the geopolitics of cyber-warfare, using machine learning, supercomputing, and non-linear system dynamics to comprehensively model climate risk, briefing dozens of national and international reporters on disinformation, creating new experimental models for tracking influence attacks, designing and organizing original scenario planning games for cohorts of public and private sector leaders, and advising governments around the globe as they confront emerging threats.

Dr. David A. Bray was named one of the top "24 Americans Who Are Changing the World" under 40 by Business Insider in 2016. He was also named a Young Global Leader by the World Economic Forum for 2016-2021, and he accepted the role of Co-Chair for an IEEE Committee focused on Artificial Intelligence, automated systems, and innovative policies globally for 2016-2017. He has been serving as a Visiting Executive In-Residence at Harvard University since 2015. He was also named a Marshall Memorial Fellow for 2017-2018 and will travel to Europe to discuss Trans-Atlantic issues of common concern, including exponential technologies and the global future ahead. Since 2017, he has served as Executive Director for the People-Centered Internet coalition, co-founded by Vint Cerf, which focuses on providing support and expertise for community-focused projects that measurably improve people's lives using the internet. He also provides strategy to, and advises, start-ups espousing human-centric principles for technology-enabled decision making in complex environments.

Transcript

This transcript has been lightly edited.

What is Guardians.ai?

Brett Horvath: Guardians.ai's mission is to protect pro-democracy groups around the world dealing with information warfare and what we call engineered volatility. By engineered volatility, we mean efforts to interfere in open markets, elections, culture, and media. We've been doing this quietly for about three years and have run into all sorts of interesting use cases across the public and private sectors. It's been an interesting ride.

What is disinformation?

Brett Horvath: I think disinformation is a great starting point. Fake news isn't really the right lens to look at it because disinformation can start with something that wasn't intentionally false but turns into something that confuses lots of people.

Dr. David Bray: When things are taken out of context, that creates the polarizing social wedges we face, and that is the challenge in addressing this. What Guardians.ai has been doing is tracking the data behind this as a way for the data … (indiscernible, 00:02:12) in terms of what they're looking for and what they're trying to address.

Michael Krigsman: When we talk about disinformation, David, give us some examples.

Dr. David Bray: Sure. In Europe, we've seen the Yellow Vest phenomenon as an example. That may initially have spread virally, but what really made it accelerate was when a combination of automated accounts and intentional actors tried to inflame both sides of the Yellow Vest debate. They took pictures out of context, text out of context, and tried to … (indiscernible, 00:02:48). Brett, do you have additional thoughts about examples of where disinformation, or taking information out of context, has been weaponized?

Brett Horvath: Yeah. I think there are some key themes that come up. With disinformation, everything old is new again: efforts to focus on racially divisive narratives and to spread division among different groups of people online. We saw that a lot in 2015 and 2016. It's not just an American phenomenon; it happens all over Europe in relation to elections.

We're seeing that happen a lot right now in the U.S. We saw groups of foreign-based, coordinated accounts on Twitter pushing racially divisive narratives. Their activity started to increase in February and March, but they're really starting to light up right now in a big way. We're keeping our eye on them to see how far they will go and what groups they are targeting.

Racially divisive narratives are, I think, one of the most important things to keep your eye on. It's not just about politics. We're seeing these groups of accounts go after banks and tech companies, so it's interesting.

Dr. David Bray: Brett, a question: It used to be, back in 2012 or so, that you could fairly easily tell whether an account was a bot simply because the bot never slept, whereas humans had to sleep. As soon as that became publicly known as a tell for detecting bots, around 2013, the operators started programming automated accounts that spread tons of information to sleep for eight hours and then come back on. Could you talk a little bit about the data behind how you detect whether something is being done intentionally as disinformation or as a polarizing social wedge?

Brett Horvath: That's a great question. There are a lot of bot detector stories and studies that try to map very simple things, like whether accounts are retweeting each other or showing very obvious linguistic tics. That's just not sufficient anymore. Bad actors have evolved their tactics significantly; they not only emulate humans but hide among real human users and draw them into their own patterns of coordination.

From our standpoint, you can't rely on automated solutions alone. We call it the fallacy of the one big net. Everyone wants to have one big net of data and then rely on magical, automated anomaly detection.

The problem is, those are really narrow algorithms designed with a very narrow intent. Over time, they accumulate what we call analytical debt. It's like buying a new car: it loses value as soon as you drive it off the lot.

Dr. David Bray: [Laughter]

Brett Horvath: Combining human-level analysis and ethnography with machine systems and automated detection, and then building learning loops between them, is, from our standpoint, the only real way to solve it. Definitely be skeptical of anyone promising one technology, one automated solution, because they're probably not going to be able to keep up with the pace of innovation of Russia's or China's entire information warfare infrastructure.
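To make the "bots never sleep" tell concrete, here is a minimal sketch in Python. The function names and the 22-hour threshold are hypothetical illustrations of one weak signal, not Guardians.ai's actual method; as Brett notes, any single automated heuristic like this is easy for bad actors to evade.

```python
# Illustrative sketch of the diurnal-activity tell: humans usually show
# a quiet sleep window, while naive bots post around the clock.
from collections import Counter

def distinct_active_hours(timestamps):
    """Count the distinct hours of the day (0-23) in which an account posts.

    timestamps: iterable of datetime objects for one account's posts.
    """
    return len(Counter(ts.hour for ts in timestamps))

def looks_sleepless(timestamps, min_hours=22):
    """Flag accounts active in nearly every hour of the day.

    A human account usually has a quiet 6-8 hour window; activity in 22+
    of 24 hours is a weak signal of automation. Operators learned to
    schedule "sleep" for their bots around 2013, so a tell like this
    should only nudge a probability score, never decide on its own.
    """
    return distinct_active_hours(timestamps) >= min_hours
```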

How do you scale the fight against disinformation?

Brett Horvath: Well, this really gets at the challenges and limitations of top-down, command-and-control approaches to managing information warfare, whether for identification, analysis, or response. You can have the most well-funded, well-run, top-down organization, but you're going to be overwhelmed by what is really nonlinear information warfare: tactics that people haven't seen before.

That combination of human-level analysis and machine systems is what we call augmented intelligence workflows. The whole idea is that they're tech agnostic, or they can be. They're extensible, so you can partner with peers in your industry, a volunteer association, or an NGO. We've found that when it's the really nasty stuff, it's pretty rare that one organization can fight it alone.

These augmented intelligence workflows allow you to rapidly spin up new ways of identification and analysis with partners, either on a one-off basis or in a sustained manner. It's really about getting serious about collaboration and getting outside the top-down hierarchy because, no matter how big, cool, and well-funded your organization is, you're probably not going to be able to keep up with the rate of innovation.

Michael Krigsman: David, I think maybe you can share some insight into the approaches.

Dr. David Bray: Sure. With the People-Centered Internet, where I serve as Executive Director, we're working with Guardians.ai and other groups doing counter-disinformation and counter-misinformation work. The techniques really are as Brett described: there's no one single way to detect this. If people claim it can be done with one simple solution, that's not the reality, and it probably hasn't been the reality since 2013 or 2014.

As Brett mentioned, they're getting much more advanced in how they adapt and shift. I know he's had cases where he identifies something as appearing to be automated. Then, when someone goes to follow up on those accounts, be it a journalist or someone else, all of a sudden the accounts shift, and they have real people behind them for that time period. Then they switch back to an automated fashion.

It really is almost like being a detective: you're looking for different tells, and you're using the data to drive that. You're moving things up or down the probability curve that this is not a natural human phenomenon, whether in terms of sheer volume or in terms of patterns where certain accounts only interact with certain other accounts. It has to be a multifactorial approach because it can't be done with a single silver bullet alone.
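To illustrate the multifactorial approach David describes, here is a hedged Python sketch that combines several weak signals into a rough score. The signal names and weights are invented for illustration; they are not the actual factors Guardians.ai or anyone else uses.

```python
# Toy multifactorial scoring: no single tell decides; each weak signal
# nudges a probability-like score up or down. All names and weights
# here are hypothetical.

SIGNAL_WEIGHTS = {
    "posts_around_the_clock": 0.20,       # the old "never sleeps" tell
    "extreme_posting_volume": 0.25,       # far above human-plausible rates
    "interacts_with_fixed_cluster": 0.30, # only engages the same set of accounts
    "co_spikes_with_known_group": 0.25,   # surges in sync with a tracked network
}

def automation_score(signals):
    """Combine boolean signals into a rough 0-1 score.

    signals: dict mapping signal name -> bool. This is not a real
    classifier; it shows how several weak tells combine so that no
    single evasion (e.g., scheduled "sleep") defeats detection.
    """
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

# An account with extreme volume that also co-spikes with a known group
# scores 0.5: enough to queue for human review, not enough to auto-label.
example = {"extreme_posting_volume": True, "co_spikes_with_known_group": True}
print(automation_score(example))  # 0.5
```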

The reason the People-Centered Internet cares about this is that we really want the Internet to be a source of hope for people, much like it was in the mid-'90s, when it was intended to uplift lives and help us reach greater understanding and greater truth; now we see it dividing us. The reason we find partnering with Guardians.ai and other groups so valuable is that this is the needed public health, epidemiology-like approach: recognizing that all of us, including the public, contribute to this phenomenon through what we choose to post.

Brett, could you tell us a little more about what you and your team at Guardians.ai did when you examined possible election interference in the 2018 midterms? How did you proceed in that investigation?

Brett Horvath: We were looking for things that other people were not spotting. There were a lot of new bot detector systems and research. What we found, just by poking around looking for strange anomalies, was a group of accounts that had been coordinating around voter fraud narratives for over three years. They were co-spiking at bizarrely similar intervals; for three years, the pattern looked like an EKG readout.

Then, around late summer, we realized they had started surging. They went from zero mentions in a day to suddenly getting 10,000 mentions. No one was detecting this because pretty much everyone's sensor networks were built around monitoring retweets and whether bots were retweeting each other. This slight shift in tactics, from retweets for amplification to mentions, meant that almost everyone missed what ended up being one of the largest influence attacks on the 2018 midterms.

To identify these accounts, which weren't picked up by bot detectors and were coordinating in this different way, we had to spin up an augmented intelligence workflow with our company, a group of volunteer researchers, the San Diego Supercomputer Center, and two academic institutions to figure out what was going on, spin up different analytical loops, and then iterate faster and faster. That was the only way we were able to find it. I don't think any one organization could have identified this influence attack.

That collaborative approach isn't just good for the whole of society and democracy. Everyone involved learned a lot that was valuable from a business standpoint and from an innovation and technique standpoint. But it was a pretty scary thing because, towards the end, those accounts were the biggest source pushing the caravan crisis, and they were fusing distrust of democracy with racially charged narratives claiming, essentially, that millions of people coming in from the caravan were going to vote illegally in close Senate races.

That fusion of distrust in democracy and racially charged narratives, that's a common thing. We saw it in 2016. We saw it in 2018. Now we're seeing it again this year.
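As an illustration of the co-spiking signal Brett describes, here is a minimal Python sketch that flags pairs of accounts whose daily mention counts surge together. The input format and the 0.9 threshold are hypothetical; as Brett emphasizes, a correlation like this is a lead for human analysts, not proof of coordination.

```python
# Sketch: find account pairs whose daily mention counts move together,
# the "co-spiking at bizarrely similar intervals" pattern.

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy)

def co_spiking_pairs(series_by_account, threshold=0.9):
    """Return account pairs whose mention time series are highly correlated.

    series_by_account: dict mapping account id -> list of daily mention
    counts over the same date range. Highly correlated spikes across
    nominally unrelated accounts are a starting point for human-level,
    ethnographic analysis, not a verdict on their own.
    """
    accounts = sorted(series_by_account)
    pairs = []
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            r = pearson(series_by_account[a], series_by_account[b])
            if r >= threshold:
                pairs.append((a, b, round(r, 3)))
    return pairs
```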

Is collaboration the solution to disinformation?

Dr. David Bray: Collaboration really is recognizing, as Brett showed with the … (indiscernible, 00:12:28) used as a polarizing social wedge for election interference in 2018, which no one organization spotted, that you have to collaborate not just within your company but also with academia, as he mentioned, and with other research institutions, while recognizing that there is a balance between involving the private sector and the public sector.

A lot of these polarizing social wedge efforts, whoever is actually behind them, are trying to inflame divides across different groups. Really, they're trying to paralyze us: if we're so divided that we can't work together and can't trust each other, that is a win for them.

Oftentimes, it really is about coming forward and saying, "Look. We're trying to do the best we can in whatever situation this is." As Brett mentioned, initially, no one had spotted these things that had been going on for three years. It was only when Guardians.ai looked at enough of the data that they saw these very abnormal spikes happening at very odd, coincidental times. It wasn't necessarily a telltale signature, a fingerprint, or something forensic. It was simply looking at the spikes and the signatures.

That, one, suggests to me that you need a "many eyes" approach that looks for the nonobvious, not just the obvious, because if it's obvious, it may very well be that whoever is driving the social wedge wants you to cue in on the obvious and be distracted by it. Instead, you want to look for the nonobvious social wedges, these spikes or coinciding time sequences, and then begin to have a greater conversation: What else are we seeing? What else could this be? What else is going on here?

It's recognizing that a lot of these efforts begin with identifying something that's not obvious, involving enough partners, and then drilling down into the data so you can go further in. That's what makes Guardians.ai so important: it really is a team effort. What Brett is doing is really collaborative across different groups.

Michael Krigsman: Brett, tell us about that collaboration.

Brett Horvath: One of the things I was getting at is that, inside an organization, you should look at this not just as a communications problem or a cybersecurity problem. The first transformative act is building bridges between risk, cybersecurity, and communications so that you get a shared taxonomy of what you're seeing.

One of the first problems is that some smart person in one of those divisions sees something weird and doesn't know what to call it or who to send the "This is weird" email to. Establishing that shared taxonomy, and then how you get it up to executive decision making, is really important. If your organization treats information anomalies and influence attacks as something for mid-level management rather than something that rises to the top level, you're probably going to miss some significant business risks or opportunities, because these things are not isolated to one part of the business operation.

It's not just social media. What happens on social media doesn't stay there. Sometimes these social media influence attacks are designed to lead to phishing attacks that get into CTOs' or CEOs' accounts. If you don't have those lines of communication between risk and the comms team, you're going to miss out.

How does disinformation affect public health?

Brett Horvath: I think there are a lot of specific, near-term things. One thing I'll say is that it's important to understand that, as people get really engaged with, and some might argue addicted to, certain social media applications or Facebook groups that are designed to target their amygdala, their sense of fear, like anti-vaxxer movements, that starts to have a physiological effect.

Think about a Russian active measure targeting a group of people, getting them to change their digital behavior and consider not vaccinating their kids. They don't vaccinate their kids, and then there are measurable, real-world health outcomes.

Thinking about that risk on a unified biological and information continuum is, I think, a starting point both for analyzing near-term risks and for thinking about the health effects. We need to move this beyond politics and stop asking whether these accounts are targeting left- or right-leaning people and, therefore, whether it's good or bad. Information warfare, disinformation, whatever you want to call it, is a public health crisis that affects all of us. If we can start to measure these things scientifically from a health standpoint, I think the health tech community and the public health sector could do a lot of important business innovation and public good in this space.

Dr. David Bray: I just want to emphasize what Brett said. That's a very tangible example of how people's anger and fear can be used on social media, which is designed to keep you coming back. Unfortunately, the platforms have designed themselves to be habit-forming.

In the case of the anti-vaccine movement, these campaigns have mobilized people into rather extreme views, and that is now resulting in actual measles cases showing up in the United States. This is a case where disinformation has had physical effects on a nation.

It's an interesting example of the reality: as your physiology changes with exposure to this disinformation, more facts will not change your views, and you lose the sense of the person on the other end as a fellow human being. That's true as well for those trying to bridge these groups together: you have to embody what President Lincoln said, which is, "I do not like that person. I must get to know them better."

What happens as a result of these disinformation activities is that we lose empathy. That affects both our mental health and our physical health. It's also recognizing that tackling this is really hard because, again, more and more research shows that just providing facts will not change people's opinions. If anything, it makes them dig in further and look for their extreme views to be confirmed: confirmation bias. When you say X is not true, nobody remembers the … (indiscernible, 00:19:10). They just remember the … (indiscernible, 00:19:12).

This is very much like a new field of public health, whether it's called cognitive public health or something else. We need to figure out, especially for open societies, how the social media platforms, the news platforms, the corporations, the public sector, the private sector, and NGOs can all work together on this because, if we don't, we may find ourselves becoming autocracies of thought that eventually lead to autocracies of societies as a whole.

Is data science the core of disinformation and information warfare?

Brett Horvath: I think data science is an important part of it, but I think it's really about who can spin up the best learning teams. Usually, the ones that win and evolve are the interdisciplinary learning teams that are organized from a culture and operations standpoint to optimize for learning.

The part about machine learning that people often forget is the learning. Humans who are learning are the ones that encode their learning instincts into those systems. Whether it's AI or augmented intelligence that combines human and machine learning, if you want to win what could arguably be called a war of cognition, the groups that win are those that structure cultures of learning, or accelerating learning cultures, embed them into effective technical systems, and then build virtuous learning loops between them.

Right now, authoritarians have an advantage in this space because of how they concentrate data. In the case of China or Russia, they have coordination among their public sector, their militaries, and their companies. But this is where open societies can reassert their advantage: diversity of thought, open ideas, innovation. It's not just about relying on command and control. We have to lean into our advantages and make the most of them. Open, diverse, interdisciplinary cultures and societies can learn better, faster, and deeper if we do it right and do it intentionally.

Dr. David Bray: Brett, have these groups ever come after you? Have you experienced it yourself, where they've directed a disinformation attack at you because you uncovered or exposed something they didn't want exposed?

Brett Horvath: Oh, yeah. That's been a fun part of the journey. When we exposed this group of accounts promoting voter fraud and racially charged narratives in 2018, we thought there'd be some blowback because it was a national story, but nothing happened, even though we had taken over the hashtags they had been coordinating on for a couple of years. But then Politico came to us in February and asked, "Can you see any activity going on in the 2020 presidential race?"

We saw that this same group of accounts was driving the vast majority of the conversation on Twitter. Most of them were foreign accounts, so we exposed it. It went around like crazy. I went on TV. The reporter went on TV.

Then, all of a sudden, for two weeks, we had 50,000 to 200,000 accounts coming at us. They were accusing me of being at the center of a transatlantic conspiracy. They strung together my college volunteering for a microfinance org with some guy I know who was a Silicon Valley investor.

Then Q of QAnon, the figure who is very influential in the conspiratorial Web, posted on 8chan saying that I and Guardians.ai were factious actors trying to silence the voice of patriotic Americans. Q's whole army came at us. We got lawsuit threats and death threats on our phones, and so did the reporter. Our data partners were attacked by hackers. That was pretty scary for a little bit.

Dr. David Bray: I do want to emphasize that what Brett is sharing is not an isolated case. There are different teams experiencing this within the United States and elsewhere in Europe and North America. We're also seeing some show up in Australia and parts of Southeast Asia. This needs to be a rallying call for those individuals willing to commit to the combination of data science plus almost gumshoe detective work and, ultimately, empathy.

You've heard Brett talk about cognition and public health. I come from a background that included bioterrorism preparedness and response. At the time, we were dealing with invisible biological agents. How do you know if the information you're getting is the whole set of what really has occurred, the whole set of the facts, or is it being taken out of context?

The challenge is, with the Internet, we've removed geography. Now, anybody and everybody can deliver you information that may make you feel good or may play to your belief system and likes, but you don't know if it's being taken out of context or if it's meant to polarize you even further and make you part of this challenge.

Much like how we had to grapple with a changing world as we became more connected and people started traveling overseas, and had to deal with public health and infectious diseases that were now spreading from different parts of the world because of that connectivity, it's not that we say we didn't want to be connected. Now that we are connected, both physically and through the Internet, it requires public sector and private sector organizations to find new ways of working together if we're going to stay open as a society. When we come back to Brett, I think it'll be interesting for him to talk a little about how, yes, that group came after him as a result of what he exposed, but also how he moved forward and became more resilient as a result, and how any individual organization or community can do the same as they move forward together.

Michael Krigsman: Yeah, Brett. It'd be very interesting to hear about that. At the same time, can you address the issue of whether we're really talking here about data-driven lying at scale? Is that a good way to summarize it?

Brett Horvath: Data-driven lying is part of it. But I think it's really about making people feel validated, that their worldview is real, and then building this tone of interaction around self-validating thoughts and ideas.

One of the key things that information warfare, or disinformation, plays off of isn't necessarily lies; it's loneliness. If you can offer a narrative or a sense of connection, you can reach people. Think of very toxic people spreading divisive material on Reddit to a lot of lonely young men who feel disaffected. That's part of it.

Now, in relation to what you do about getting attacked and how you build more resilience, one thing is that you expect the attacks to happen and you plan for them. Bad actors are laying all sorts of traps, trying to get people to engage. When Q came after us, we had expected that something like that might happen. We had no idea it would be that big, but it's hard to map these things by getting historical data from Twitter; it takes an act of legal and bureaucratic sorcery to get historical data.

If you expect it when it's coming, you can capture so much data. When Q attacked us, we had worked with all of our partners to know the keywords, the networks, et cetera, through which they would come at us. It was as if, on a rainy day, we just let the floodwaters come into our reservoirs, and we built this great, glorious map of the conspiratorial Web, both foreign and domestic.

We turned the attack by Q into an asset. Having that anti-fragile approach, where you're not just resilient to attack but, when you are attacked, it makes you and your community allies stronger, is, I think, a really important part of good strategy and tactics in this space.
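To make the "let the floodwaters into the reservoir" tactic concrete, here is a hedged Python sketch of pre-registered keyword capture. The stream source, record format, and watch list are hypothetical stand-ins, not a real platform API or the actual keywords Brett's team agreed on with its partners.

```python
# Sketch: when you expect an attack, pre-register keywords with partners
# and capture matching posts in real time, avoiding the "legal and
# bureaucratic sorcery" of requesting historical data after the fact.
import json

WATCH_KEYWORDS = {"example_hashtag", "example_narrative"}  # hypothetical

def capture(stream, out_path="attack_capture.jsonl"):
    """Append every post mentioning a watched keyword to a JSONL file.

    stream: any iterable of dicts with at least a "text" field; a real
    deployment would wrap a platform's streaming API here.
    """
    with open(out_path, "a", encoding="utf-8") as f:
        for post in stream:
            text = post.get("text", "").lower()
            if any(kw in text for kw in WATCH_KEYWORDS):
                f.write(json.dumps(post) + "\n")
```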

Dr. David Bray: I think the key to what Brett just showed is that it's almost like jujitsu: taking whatever energy is thrown your way and finding a way to benefit from it. He said a little earlier that it's about making folks recognize that this is not about whether it's just the political left or just the political right that's having this happen. It's really that all of us are experiencing this.

The interesting thing that Brett has experienced, that I have experienced, and that others working in this space have experienced is that some countries, more so in Europe or in other parts of North America, are more progressive in recognizing that this is a whole-of-society challenge. We here in the United States, for whatever reason, still seem stuck in different groups thinking that disinformation only affects them, not reaching across and recognizing that there is disinformation being done on the other side too, almost like throwing water on a grease fire to make it even worse.

For societies still in that infancy stage of trying to address this challenge, the one thing I would ask, from a People-Centered Internet perspective, is to recognize that it's about all of us addressing this together and that it's not just one side or the other; all sides are intentionally being divided. As Brett mentioned, it's about figuring out, from that, how we can be more resilient and more resolved as we go forward in addressing these issues.

As an example of countries that are more forward-leaning, we've seen what are called the Latvian elves. The Latvian elves were developed to stand up against the trolls. Basically, these are efforts that say it's not about rallying your community into indignation. If you're feeling righteous indignation, that's probably a case where someone has so polarized you, through social media or the news you've been presented, that you're no longer thinking rationally about it and you're no longer open to considering different facts in their full context.

That's really what the Latvian elves are trying to do in Latvia. We're seeing the same thing in Estonia and Finland. They obviously have to do this because of the larger neighbors they deal with, who may be doing these sorts of things to them. Looking for more community-based solutions is really what we're seeking as we move forward.

How do we address the problem of disinformation?

Dr. David Bray: I don't think anything is ever too large a problem. That's what I like to do: rush in and try to grapple with it, even if it's messy and complex. I think Brett would agree. It starts, first, with having conversations about this, raising awareness, and getting different perspectives on the issues; diversity and variety of perspectives will make us stronger. Second, it's asking, "What are the community-centric approaches that are collaborative?" because no one organization or group is going to solve this by themselves.

Brett Horvath: What gives me peace is that the solution starts with a mindset of understanding that a lot of the goals of these efforts are to divide us, to make us afraid of one another. Once you realize that, it actually feels great: folks who, in the early 2000s, I would have thought were my political enemies can actually be allies, because we have a shared threat and a shared opportunity.

We address this with a strategic and tactical framework we developed over three years. It starts with three very simple protocols, which are really a way of approaching this space whether you're a member of the public, a CEO, or a skilled practitioner. It's three things.

One is to elevate the conversation, usually through increased self-awareness. If you want to elevate the conversation, increase your self-awareness of your organization, your own biases, how you're getting targeted, et cetera.

Number two is to find the common cause. Finding the common cause pays surprising dividends in power to the people who know how to do it, because we're in a time of political and economic weirding, so there are new coalitions and new opportunities.

The third is to listen for, discover, create, and share the most effective tactics available. Whatever you think works, whatever your assumptions are, find the most effective tactics available. That's meta; we call it the meta-framework.

Elevate the conversation, find the common cause, and look for and create those most effective tactics available. If you just keep coming back to that, then, day to day, you end up finding new allies, mobilizing your team in a different way, and seeing this as an opportunity for discovery, while you're also finding some really powerful weapons to take out bad actors if that's the case.

What advice on disinformation do you have for senior business leaders?

Dr. David Bray: With corporations, it's recognizing that this is a very real issue and, as Brett mentioned, no one part of your organization can solve it. It's not just the IT department. It's not just communications. It's not just marketing. It really has to be elevated to the C-suite and the board. The more the C-suite and the board can have conversations about this as the new reality, the better, because it's going to happen whenever you do something that gives people an opportunity to polarize and divide, either in a country or in your marketplace. The advice would be to elevate this to a board-level conversation.

Brett Horvath: Yeah. I couldn't agree with David more. I was talking with someone the other day and said, "I don't imagine myself as waterproof to influence." A lot of these campaigns target getting inside the social media loops, or the conversations and discussions, of CTOs and CEOs just to change one little thought on an issue.

You have to be thinking of this, fundamentally, as risk and something that cuts across all of your business divisions and opportunities. I would totally agree with that on the corporation side.

What advice on disinformation do you have for journalists and media?

Dr. David Bray: I think that's an interesting question because the challenge is: how do journalists even know whether what they're pulling from social media or from sources has its full context, or whether they themselves are being targeted? We just saw this week, unfortunately, a case where neo-Nazis are targeting specific journalists with doxing, so that's a weaponization.

I think, for the media, it really is just recognizing: be skeptical. But also, wherever possible, and I think this is just good practice, try to get all the different perspectives and write a nuanced article that doesn't inflame your readers.

Now, the challenge I think we face is that print subscriptions are declining and a lot of what drives media now is the advertising-based model. The challenge is that, unfortunately, this gets to the other problem: we as people don't tend to read longer, nuanced articles, or we don't read past the headline. This may even mean we need a new model of funding journalists and media in a way that allows them to write the in-depth stories they need to write without unconscious or subconscious biases toward making things more polarizing and inflammatory.

What advice do you have for leaders, department heads, and business decision-makers inside the government?

Dr. David Bray: In this case, elevate it to the highest levels of the executive branch and legislative branch to recognize this as a challenge. I would go further and say that public service, by its nature, is very visible and very open; if it's working well, it needs to be open. The challenge is that this openness gives all sorts of opportunities for weaponization by people who want to do divisive things, take things out of context, and create disinformation.

For those leaders, it's actually the same as with corporations: elevate this as a risk conversation. Also, it shouldn't be a conversation where you're trying to point fingers or assign blame. You're trying to encourage an always-learning organization where, as Brett mentioned, the mid-level manager or the worker who sees something odd has the ability to elevate it and say, "I'm seeing something odd. It may be something. Maybe we need to focus on it. Maybe we need to address it." More importantly, you're in an "always learning in the public sector" mode.

Then, finally, engage the public in these conversations that this is a concern for all open societies, all open nations because, if not, again, we will become autocracies of thought and, ultimately, autocracies in reality too.

Michael Krigsman: Brett, your thoughts on advice for both folks working in the government, not necessarily elected officials but people in the government, as well as for journalists and what they should be doing in relation to these issues.

Brett Horvath: I think it starts with a common piece of advice for both, which is realizing you're in a domain of intelligence agencies, influence operations, et cetera. For journalists, the rules of investigative reporting and figuring out what's going on are very different and, especially given the state of the news media, you're just not going to have the resources you need.

Part of the approach we've developed, we call an asymmetric strategic and tactical framework because, no matter how big you are, you're not going to have enough resources to fight back against nation-states or corrupt international interests. If you're a journalist, whether at the New York Times or at a small outfit, you need to figure out how to spin up the right coalition of partners to help you out, how to align them, and how to take that Aikido approach of using the energy that's already in motion to your favor. That asymmetric approach is the only way you're really going to be able to address the problem.

We've trained journalists to do this. We've worked with journalists in coalitions, both here and abroad. As a reporter, you're essentially spinning up a research project. Journalists are actually really good people to bring in because every story is a different learning process, a different learning community. The ones that stick around learn some good heuristics and tactics.

Now, for governments: even though you're big, you've got to think in the same asymmetric terms. If you're a big agency or a big military, it's going to be hard to marshal enough tactics, forces, and resources all at once. You've got to partner, and you've got to find those most effective tactics available because, cliché as it is in business books, the best ideas are probably not inside your organization's walls. What are you going to do about getting those ideas, vetting them, and incorporating them as fast as possible, and then also partnering with people who can execute on them who aren't on your payroll? That's the mandate of an accelerating information warfare or disinformation environment.

Michael Krigsman: Finally, as we finish up, let me ask both of you for your concluding thoughts. We have just a couple of minutes left. David, let me turn it over to you.

Dr. David Bray: The one group we didn't talk about is all of us, as members of the public and members of different communities. Really, the ask concerns all of us and this new medium, the Internet. We didn't even talk about, and it could easily be another show, the fact that some of these things have been around since the 1890s. We had concerns about yellow journalism then. Remember the USS Maine incident, which may or may not have contributed to causing the Spanish-American War.

What's new with these technologies, the Internet, artificial intelligence, automation, is that we, as humans, have a responsibility to think before we like, retweet, or repost something, or add emotion to something online. I'm not saying you shouldn't do it if it's a cause you believe in, but think about what you're doing, because you may be throwing gasoline on that fire, or grease on that grease fire, and making it even worse.

At the end of the day, it comes down to all of us. Again, embody that quote from President Lincoln: "I do not like that person. I must get to know them better." If there could be one appeal, it would be this: think about community-centered approaches that embrace the diversity of open society and the diversity of thought and, at the same time, have empathy for people you may not initially agree with but with whom you can find some way to move forward.

Try to find a big enough tent wherever possible. If people search for Guardians.ai on the Web, you obviously won't find a big footprint, but you will find some articles about them. If folks are interested in engaging with Guardians.ai or with the People-Centered Internet coalition, we do have a website, peoplecentered.net. We welcome people to get involved with this community because, at the end of the day, it really is about community-centered approaches. What we may be discovering is that we spent the last 20 to 30 years developing tighter integrations but missed the needed thing for the next decade: can we develop technologies that empower a diversity of communities to work together, play together, and live together in a better way?

Michael Krigsman: Brett, any final, very, very fast thoughts from you before we end?

Brett Horvath: Yeah. Define the common cause with people you wouldn't ordinarily think about working with. Diversity of thought, as a way to learn faster and better, is one of our greatest strengths. These technologies that get us into our own little filter bubbles, and these influence campaigns that try to divide us further, if you can invert that, learn from a lot of different sources, and partner, whether it's in business or to protect a community or country, that's a great source of power. That's a driver of innovation, and it's how I think the public fights back and wins to defend open societies.

Published Date: Jul 26, 2019

Author: Michael Krigsman

Episode ID: 612