
Bot Tracking: How DisinfoLab Monitors Inauthentic Activity

Updated: Aug 23, 2021

DisinfoLab is a student-led research lab at the College of William & Mary’s Global Research Institute dedicated to informing American citizens about the threat of disinformation on social media platforms. Disinformation is a major threat to American citizens and democracy, and it is more than just “fake news”: the January 6th attack on the Capitol, voter suppression during the 2020 election, and surges in COVID-19 hospitalizations are all real-world consequences of online disinformation. Because the threat of disinformation is far less obvious than, say, a nuclear bomb or a terrorist attack, it remains a relatively unexplored field of study, particularly at the undergraduate level. As a result, the American public is grossly uninformed about the harm that disinformation can (and does) inflict. It’s our mission to change that.


Through DisinfoLab, we dedicate exhaustive coverage to issues of disinformation while contributing our own research to the field. For example, our analyst team tracked bot data to expose that the “incriminating” emails on Hunter Biden’s hard drive were part of a disinformation campaign. On the day of the January 6th Capitol insurrection, we revealed that bots were amplifying a narrative that disguised Antifa members had started the riot. Most recently, we found that bot accounts amplified the controversy over Andrew Cuomo’s underreporting of COVID-19 nursing home deaths.


DisinfoLab uses data tools from two online platforms to track inauthentic activity on Twitter: Trends24 and BotSentinel. Trends24 allows us to view Twitter’s overall trending topics hour by hour. BotSentinel provides two bot-tracking tools that highlight, respectively, the top hashtags and the top two-word phrases tweeted by inauthentic accounts each hour. Using data from these two platforms, our analyst team compares potential bot activity against Twitter’s overall trending topics and looks for indicators of disinformation and influence.

Trends24 collects data on the top phrases and hashtags trending on Twitter every hour. Sometimes these trending topics are benign: hashtags about a recent sports game, celebrity gossip, and so on. We are interested in tracking political topics and when they begin trending across Twitter. When these political trends emerge, we consult our collected bot activity data to determine how bots are interacting with the news cycle. Are inauthentic accounts breaking the story? Or are they trying to spin or alter an emerging narrative? These central questions help DisinfoLab connect bot activity with the real experiences of Twitter users and, by extension, Americans. The goal is a deeper understanding of how inauthentic accounts control or shift our political conversation.
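To make that comparison concrete, here is a minimal sketch, assuming the hourly lists have already been collected. The function name and the sample lists are illustrative assumptions, not part of either platform's API or the lab's actual code.

```python
# Minimal sketch: check an hour's overall Twitter trends against the hashtags
# flagged as heavily tweeted by inauthentic accounts in the same hour.
# find_overlap() and the sample lists below are illustrative only.

def find_overlap(trending_topics, bot_hashtags):
    """Return terms that appear both in Twitter's overall trends and in the
    hashtags most tweeted by flagged accounts during the same hour."""
    def normalize(term):
        return term.lstrip("#").casefold()

    trending = {normalize(t) for t in trending_topics}
    flagged = {normalize(h) for h in bot_hashtags}
    return trending & flagged


# Hypothetical hourly snapshots:
trends24_topics = ["#Election2020", "SuperBowl", "#StopTheSteal"]
botsentinel_hashtags = ["#stopthesteal", "#voterfraud"]
print(find_overlap(trends24_topics, botsentinel_hashtags))  # {'stopthesteal'}
```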


BotSentinel is an A.I. program that analyzes, categorizes, and tracks inauthentic Twitter accounts. BotSentinel is free and offers a browser extension that lets users see the site’s real-time analysis of the Twitter accounts in their feeds. Using artificial intelligence and machine learning, BotSentinel generates authenticity ratings for Twitter accounts. These ratings are not binary: they do not directly verify whether an account is a “bot” account or a “real” person. Instead, the system rates each account on a scale from one to one hundred and then categorizes it as normal, questionable, disruptive, or problematic. These ratings are developed by analyzing how often a Twitter user engages in activities that violate Twitter’s guidelines. Using this data, BotSentinel then compiles lists of the top hashtags, two-word phrases, URLs, and mentions tweeted by inauthentic accounts, updated every hour. More information about how BotSentinel analyzes accounts can be found on the About page of its website.
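As a rough illustration of how a one-to-one-hundred rating could map onto those four labels, here is a short sketch. BotSentinel's actual scoring logic and cutoffs are not described in this post, so the thresholds below are placeholder assumptions rather than the platform's real criteria.

```python
# Illustrative only: the cutoffs below are placeholder assumptions used to
# show how a 1-100 rating could map onto the four labels described above.

def categorize(score):
    """Map an authenticity rating (higher = more rule-breaking behavior)
    to one of the four categories named in this post."""
    if not 1 <= score <= 100:
        raise ValueError("score must be between 1 and 100")
    if score <= 25:      # placeholder cutoff
        return "normal"
    if score <= 50:      # placeholder cutoff
        return "questionable"
    if score <= 75:      # placeholder cutoff
        return "disruptive"
    return "problematic"


print(categorize(18), categorize(62), categorize(91))
# normal disruptive problematic
```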


If a hashtag is flagged as trending among inauthentic accounts, it is not automatically considered “disinformation,” nor are the accounts tweeting it automatically considered inauthentic. Even if BotSentinel identifies an account as “questionable” or “problematic,” that score does not mean the account is run by a bot or a Russian troll. It does, however, indicate that the account consistently engages in disinformation-like behavior that violates Twitter’s code of conduct, such as trolling, deception, and sowing division.

DisinfoLab selected BotSentinel as a reliable tool because of its clear criteria and accessibility. BotSentinel allows our analysts to identify possibly inauthentic accounts, many of which seem genuine on the surface even though an in-depth analysis of their history by BotSentinel’s A.I. algorithm indicates otherwise. For a given hashtag, vetting BotSentinel data can reveal whether the story presented alongside the tag matches or diverges from the narratives circulating elsewhere. By investigating trends and using BotSentinel’s live Twitter feed analysis, DisinfoLab gains a better understanding of how a narrative develops. BotSentinel’s platform and tools are free and easy to use, making larger-scale analysis possible for our team.

Our analyst team processes and analyzes Trends24 and BotSentinel data using a script written by analyst Aaraj Vij. The script collects trending hashtag and topic information from both sites, as well as BotSentinel data on precisely how many times a phrase or hashtag was tweeted by inauthentic accounts during any given hour of the day. From there, the data are aggregated into spreadsheets where we can code and visualize trends throughout the day. Data are coded according to a team codebook. Once the data have been collected and coded, the Social Media Analyst team generates reports on meaningful trends.
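The script itself is not published, but the sketch below shows the general shape of such a pipeline under assumed names: fetch_trends24() and fetch_botsentinel_counts() are placeholder stand-ins for the lab's own collection code (not public APIs), and the CSV column layout is likewise an assumption.

```python
# A simplified sketch of the hourly aggregation described above.
# fetch_trends24() and fetch_botsentinel_counts() are placeholders for the
# lab's own collection code, not public APIs; pandas is used here only to
# show how hourly counts could land in a spreadsheet-friendly CSV.
from datetime import datetime, timezone

import pandas as pd


def fetch_trends24():
    """Placeholder: return this hour's overall trending topics."""
    return ["#Example", "SomeTopic"]


def fetch_botsentinel_counts():
    """Placeholder: return hashtag -> tweet count among flagged accounts."""
    return {"#Example": 412, "#OtherTag": 158}


def collect_hourly_snapshot():
    """Build one hour's worth of rows comparing bot counts with trends."""
    hour = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:00")
    trending = set(fetch_trends24())
    rows = [
        {
            "hour": hour,
            "term": term,
            "bot_tweets": count,
            "also_trending": term in trending,
        }
        for term, count in fetch_botsentinel_counts().items()
    ]
    return pd.DataFrame(rows)


# In practice a snapshot like this would be appended every hour; writing one
# out is enough to show the shape of the spreadsheet.
collect_hourly_snapshot().to_csv("bot_activity.csv", index=False)
```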


While BotSentinel is at the core of our daily bot-tracking analysis, our analyst team supplements the trending lists generated by BotSentinel and Trends24 with our own knowledge of contemporary disinformation research, Google Trends data, trending news stories, and fringe political movements. Incorporating this trending data allows for a context-based analysis of when inauthentic activity correlates with popular, authentic activity on the platform. Paying attention to what bots are interested in (and what they are not) can help determine which social and political groups inauthentic activity aims to mislead.
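As one hedged example of that kind of context check, the snippet below summarizes the hypothetical hourly spreadsheet from the earlier sketch, comparing each term's bot-driven volume with how often it also appeared in the overall trends. The file name and column names carry over from that sketch and are assumptions, not the lab's actual schema.

```python
# Assumes the hourly CSV from the earlier sketch (hour, term, bot_tweets,
# also_trending); summarizes how often each heavily bot-tweeted term also
# appeared in Twitter's overall trends.
import pandas as pd

df = pd.read_csv("bot_activity.csv")

summary = (
    df.groupby("term")
      .agg(total_bot_tweets=("bot_tweets", "sum"),
           hours_tracked=("hour", "nunique"),
           hours_trending=("also_trending", "sum"))
      .assign(trending_share=lambda t: t["hours_trending"] / t["hours_tracked"])
      .sort_values("total_bot_tweets", ascending=False)
)

# High bot volume with a low trending_share can flag narratives that
# inauthentic accounts push before (or without) organic pickup.
print(summary.head(10))
```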


Since the inception of DisinfoLab, our team’s methods have changed and grown, yielding a more complete picture of bot activity on Twitter. As the lab develops and expands its capabilities, it plans to keep adjusting its data collection methods, producing data that can power the fast-paced, meaningful analysis at the center of the lab’s vision.



