
Spreading like wildfire: Spears Business researcher using analytics to combat misinformation
Thursday, November 13, 2025
Media Contact: Stephen Howard | Director of Marketing & Communications | 405.744.4363 | stephen.howard@okstate.edu
On the afternoon of March 14, 2025, hurricane-force winds and dry conditions turned a spark in a field in rural Payne County, Oklahoma, into a wildfire. Over the next 12 hours, that blaze consumed 26,301 acres — larger than Disney World — and nearly 200 houses.
Stillwater, home to Oklahoma State University, is in the heart of Payne County. Its nearly 50,000 residents were clamoring for accurate information as the fires started devouring property on the west side of town.
Many turned to social media, where they found maps detailing the spread of the fires and official evacuation plans shared by friends and family. What those people didn’t realize is that the platforms’ algorithms prioritize engaging content over timely updates. The maps and information they saw were many reshares old by that point, and in reality, the fire was closing in.
In disaster situations, accurate and timely information can mean the difference between life and death. For Dr. Kayla Jiang, situations like this illustrate why her research matters. An assistant professor in the Spears School of Business Department of Management Science and Information Systems, Jiang uses data analytics and artificial intelligence to study how misinformation spreads during crises and, more importantly, how communities can stop it before lives are put at risk.

“Misinformation spreads really fast, even faster than the truth,” Jiang said. “We want to kill the misinformation in its infancy before it spreads beyond control.”
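Why does catching a rumor in its infancy matter so much? A toy branching model illustrates the dynamics: if every active post has some chance of being reshared to a handful of new readers each hour, reach compounds exponentially, and the final audience depends heavily on how early sharing slows down. The sketch below is a minimal illustration with invented parameters (share_prob, shares_per_reader, and the assumption that a debunk halves resharing); it is not drawn from Jiang’s actual models.

```python
import random

def simulate_spread(hours, share_prob, shares_per_reader, intervene_at=None):
    """Toy branching model of a false post: each hour, every circulating
    post has share_prob chance of being reshared to shares_per_reader
    new readers. Invented for illustration; not Jiang's research model."""
    random.seed(42)                     # deterministic for the demo
    active = 1                          # posts currently circulating
    total_reached = 1
    for hour in range(hours):
        if hour == intervene_at:
            share_prob *= 0.5           # assumption: a debunk halves resharing
        new_posts = sum(shares_per_reader
                        for _ in range(active)
                        if random.random() < share_prob)
        total_reached += new_posts
        active = new_posts
        if active == 0:
            break                       # the rumor has died out
    return total_reached

print("no intervention:    ", simulate_spread(12, 0.6, 3))
print("debunked at hour 2: ", simulate_spread(12, 0.6, 3, intervene_at=2))
print("debunked at hour 8: ", simulate_spread(12, 0.6, 3, intervene_at=8))
```

Run as written, the early debunk pushes the branching rate below the break-even point and the rumor dies within a few hours, while the late debunk arrives after most of the damage is done. The exact numbers are arbitrary; the asymmetry is the point.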
The difference between misinformation and disinformation often comes down to intent. Misinformation is false material shared without malicious purpose, like a well-meaning neighbor sharing an outdated evacuation map. Disinformation, however, is deliberately crafted with intent to deceive. Both forms have become increasingly sophisticated and widespread.
Most people recognize false information in familiar contexts like election fraud claims, conspiracy theories on social media or deceptive websites. Yet Jiang’s research reveals the phenomenon extends far beyond politics and current events. False information now thrives in seemingly mundane spaces like fake product reviews on Amazon designed to boost sales, fabricated restaurant ratings that influence where families choose to eat and misleading health claims spreading through online community groups.
When people struggle to distinguish reliable information from false claims, they find it difficult to make informed decisions about everything from emergency evacuations to health choices and even dinner plans. Each of these failures erodes public trust.
Jiang’s research also revealed that misinformation spreads differently across communities. In rural areas with limited internet access, false information often circulates through traditional media outlets like radio and television, as well as tight-knit social networks like church groups and neighborhood connections. This creates a unique challenge. The same social cohesion that strengthens rural communities can also make misinformation harder to counter because it originates from trusted sources.
“Community really matters when it comes to battling misinformation,” Jiang said. “In rural areas, we should rely on community-based approaches to help people identify and correct false information. This involves engaging with the trusted local voices like journalists, pastors and other community leaders.”
Jiang believes that media literacy programs in communities and schools might be the biggest key to battling misinformation. Her research shows false information often succeeds because it provokes strong emotions like anger and fear, which make people more likely to share content before verifying its accuracy. Headlines designed to provoke outrage or panic are often crafted specifically to bypass critical thinking and encourage immediate sharing. The solution involves teaching people to recognize these emotional triggers.
A few simple verification steps can make the difference, Jiang said. People should check the author’s credentials, read full articles rather than just headlines and cross-reference information across multiple reliable sources. Most importantly, she emphasizes the “pause principle”: taking a moment to verify information before hitting the share button.
Beyond education, Jiang advocates using AI to flag suspicious content while relying on trained humans to provide context and confirm accuracy. Some platforms have already begun experimenting with these approaches. Facebook now prompts some users to read articles before sharing them, while X (formerly Twitter) relies on Community Notes, where users can flag and fact-check suspicious content. Still, Jiang believes these efforts must be more systematic and community-focused, particularly in states like Oklahoma, where rural and urban populations receive information through different channels.
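A minimal sketch of that flag-then-review pattern appears below. The keyword list, scoring rules and threshold are invented stand-ins for a trained classifier; no platform’s real system is being reproduced here, only the division of labor the paragraph describes: software flags, trained humans decide.

```python
import re
from dataclasses import dataclass, field

# Hypothetical trigger phrases; a real system would use a trained model.
SENSATIONAL = {"shocking", "secret", "they don't want", "act now", "share before"}

def suspicion_score(post: str) -> int:
    """Crude heuristic stand-in for an AI classifier: counts the kind of
    emotional triggers research links to reflexive sharing."""
    text = post.lower()
    score = sum(1 for phrase in SENSATIONAL if phrase in text)
    score += text.count("!") // 2                      # piles of exclamation marks
    score += len(re.findall(r"\b[A-Z]{4,}\b", post))   # SHOUTED words
    return score

@dataclass
class ReviewQueue:
    """AI flags, humans confirm: flagged posts wait for a trained reviewer."""
    pending: list = field(default_factory=list)

    def submit(self, post: str) -> str:
        if suspicion_score(post) >= 2:
            self.pending.append(post)   # route to human fact-checkers
            return "held for review"
        return "published"

queue = ReviewQueue()
print(queue.submit("SHOCKING secret map they don't want you to see!!!"))
print(queue.submit("Evacuation routes updated at 4 p.m.; see the city website."))
```

The design choice worth noticing is that the software never makes the final call on flagged content; it only narrows the stream of posts that human reviewers must examine, which is the human-in-the-loop arrangement Jiang describes.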
Most importantly, Jiang said, advance preparation is far more effective than scrambling to respond once a crisis hits. Communities need established verification networks and trusted information channels in place long before emergencies strike. Residents need to know where to find reliable updates and be trained to recognize unreliable sources before the next life-threatening situation arises and emotions run high.
In a world where misinformation travels faster than flames, preparation is the best defense.
Story by: Stephen Howard | Discover@Spears Magazine
Photos by: Mitchell Alcala and Adam Luther