What is SAMbot?
SAMbot is a machine learning bot that detects and tracks abusive sentiment. During Canadian elections, we use SAMbot to collect data and generate insights about the online abuse received by candidates and political parties.
As political discourse is generally at its most abusive during campaigns, SAMbot helps us gain critical insight into the current state of online Canadian political conversations.
SAMbot can help us look at online discussions at a massive scale. To date, we have used SAMbot to analyze millions of comments in federal, provincial, and municipal elections across Canada.
Why do we need SAMbot?
While it is commonly believed that toxic online spaces are harming our democracy, we do not have sufficient data to shed light on this problem. Our SAMbot findings provide insight into how online conversations are affecting political participation in Canada, as well as working conditions on the digital campaign trail. This data can be used to inform effective policy responses to address online harms.
How does SAMbot work?
SAMbot is a machine learning bot: a software application that runs automated tasks using natural language processing, a branch of machine learning. SAMbot monitors all English and French tweets sent to candidates.
The models that SAMbot uses to evaluate language are trained and tested on millions of data points so they can identify content considered toxic, harmful, or insulting. Each time SAMbot is deployed in an election, we improve and iterate on its machine learning models, increasing the accuracy of our results.
When SAMbot evaluates a comment, it makes a confidence prediction about how likely it is that someone would interpret the comment as abusive. Machine learning allows tools like SAMbot to operate at enormous scale. However, language is highly nuanced, and SAMbot can never replace human judgment. This is why analysis of SAMbot insights is guided by human beings – members of the Samara Centre team. When using AI for civic inquiry, it is important to underscore this human analysis, because it helps counter common assumptions that artificial intelligence is impartial, autonomous, or free from bias. By taking this approach, we strive to demonstrate how AI-driven tools can contribute to civic inquiry in an ethical and productive manner.
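The confidence-prediction step described above can be sketched in miniature. This is a hypothetical illustration only, not SAMbot's actual model: the `toxicity_score` function below is a stand-in that checks words against a tiny example lexicon, where a real system would call a trained natural language processing model. The threshold value and all names are assumptions for illustration.

```python
# Illustrative sketch of confidence-based abuse classification.
# NOTE: this lexicon stub is NOT a real toxicity model; a production
# system like SAMbot would use a trained NLP model instead.

ABUSIVE_TERMS = {"idiot", "liar", "disgrace"}  # tiny example lexicon


def toxicity_score(comment: str) -> float:
    """Return a confidence in [0, 1] that the comment is abusive (stub)."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_TERMS)
    # Scale the hit rate so even a single abusive term raises the score.
    return min(1.0, hits / len(words) * 5)


def classify(comment: str, threshold: float = 0.5) -> dict:
    """Flag a comment as abusive when its confidence crosses the threshold."""
    score = toxicity_score(comment)
    return {"comment": comment, "score": score, "abusive": score >= threshold}
```

In practice, flagged comments would still be reviewed in aggregate by human analysts, consistent with the human-guided analysis described above.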