This election is about the future of our country and our democracy. Canadians need a way to understand how politics is playing out online.

Subscribe today to receive weekly, non-partisan updates:


In this period of global democratic backsliding, we are facing urgent and complex challenges related to accessing accurate and trustworthy political information online.

VERIFIED monitors the online federal election conversation in order to help Canadians understand how politics is playing out across digital spaces.

VERIFIED expands on our SAMbot project and broader efforts to understand how technology is influencing our democracy. It explores diverse online civic conversations across different platforms to better understand:

  • Where information threats such as misinformation, bots, and foreign interference are present
  • What attracts high engagement and high levels of abuse
  • Where positive civic conversation and engagement are happening online

Glossary

Affective polarization
Defined by researchers as “a trend where citizens develop a strong affective connection toward their own political side, while increasingly disliking and feeling animosity toward people with opposing political allegiances.”

Astroturfing
The practice of hiding the sponsors of a message to make it appear as though the message originates from, and is supported by, grassroots participants, in an attempt to manipulate public opinion.

Bot accounts
Accounts operated automatically or en masse, often with the intention of skewing online discussion to particular ends.

Inauthentic engagement
Online activity that is the product of inauthentic use, such as posts by bot accounts, fake user engagement, or artificial amplification. Inauthentic engagement may be coordinated in attempts by either domestic or foreign actors to influence Canadian political processes or, more generally, to sow discord and confusion online. However, as AI-generated content becomes more sophisticated and easily accessible, it is increasingly difficult to distinguish between authentic and artificially generated or synthetic content.

Power abusers
To highlight the link between high-volume posting behaviours and online abuse, we use the term “power abusers” to suggest that high-volume social media users are also very likely to post high volumes of abuse.

Power users
Users who post frequently on social media. While some power users may be real users who simply post frequently, others may be bot accounts posting at rates not humanly possible in an attempt to skew online discussions.

Synthetic content
Online content, in the form of text, image, or audio, that is fully or partially artificially altered or generated.

Why are we looking at Reddit?

There are fewer and fewer major platforms where Canadians can reliably share and discuss news online. Changes to Canada's online news landscape have made Reddit a particularly important platform for Canadian civic engagement, although it has long been one of the largest platforms among Canadians.

Meta has blocked news content for Canadians on its platforms (Facebook and Instagram) following the passage of the Online News Act, and bots and other forms of inauthentic engagement are commonplace on X (formerly Twitter). All three platforms have also significantly cut their trust and safety teams and intentionally reduced their moderation efforts. As a result, these major platforms do not offer a healthy news environment for Canadian users.

While Meta's platforms were once the largest venues where Canadians shared news, X (formerly Twitter) has also been a key platform for sharing news content for years. Preliminary data from the Toronto Metropolitan University Social Media Lab's State of Social Media report shows that while all other major platforms have seen growing userbases, X is now seeing a decline in Canadian users - a sign that some Canadians are choosing to abandon the platform after its drastic changes.

These changes in the online media landscape mean that Reddit has become a more important platform than ever before. Among the most popular social media platforms with Canadians, it is the premier platform for linking to, sharing, and discussing print news. (X allows this as well, but given the state of its moderation practices and its ownership's active disdain for Canadian sovereignty, we do not consider it a healthy place for Canadian news or civic discussions.)

Reddit use among Canadians has grown significantly in recent years: the TMU Social Media Lab's report, conducted in both 2022 and 2025, shows that the share of Canadians with a Reddit account climbed from 19% to 27% between those years.

Although only 27% of Canadians report having a Reddit account, it’s important to note that Reddit is a highly public platform. Users can view subreddits and read discussions on the platform without an account, and many do. Reddit discussions also show up frequently in online searches, particularly on Google.

Why are we looking at Bluesky?

Bluesky is a platform that has quickly grown in popularity in Canada, particularly in the last few months. While Canadian users remain a small share of the population (some estimates put it at slightly below 2%, compared to the 37% of Canadians who have an account on X), the global userbase is growing rapidly. Since the U.S. election in November, 15 million users have joined Bluesky, pushing the userbase over 35 million. In contrast, the X userbase has been declining both globally and in Canada. A number of public figures (including Canadian politicians) have explicitly left X for Bluesky, citing their dissatisfaction with recent changes to content moderation, as well as anger at the actions of X's owner, Elon Musk.

One of the reasons Bluesky has been embraced by those leaving X is its user interface: Bluesky's feed has a similar look to X, and users can search for and follow accounts in the same way. However, there are also significant differences between the platforms. Unlike X, Bluesky uses an open source framework, which means increased transparency - users can see how the Bluesky protocol is built and what is being developed. Developers can create consumer-facing apps on Bluesky: custom tools that users can adopt to curate their feeds and enhance their interactions. These tools can build on existing trust and safety features, which points to another difference between the platforms: X and Bluesky take very different approaches to content moderation.

We have chosen to track online political conversation on Bluesky to see whether the platform's approach to content moderation produces different types of civic discourse than what we have previously tracked on X. We are also interested in Bluesky because of its potential to become an important source for online news sharing in the wake of Meta blocking news content and some users leaving X. Bluesky is actively trying to attract news publishers and offering incentives - it created a subdomain specifically so publishers can track when visitors arrive at their sites through links shared on Bluesky. So far, this approach appears to generate higher traffic and conversion rates than Threads and X for international news publishers, despite Bluesky's much smaller userbase. Because Threads cannot carry news in Canada, Bluesky has a particular opportunity to become an important platform for driving traffic to news publishers.

Why are we looking at YouTube?

YouTube is the second most popular social media platform among Canadians, and it is particularly popular among young Canadians. With Meta blocking news content for Canadians, YouTube has surpassed Facebook as the leading platform that Canadians turn to as a news source, with 29% of Canadians saying they use YouTube for news each week. YouTube is also often used as a teaching tool: teachers across the country use it in their classrooms.

YouTube is an important platform to watch not only for its popularity, but also because fact-checking organisations have claimed that the platform allows for the spread of global online disinformation, particularly when it comes to election disinformation and public health issues. Its recommendation algorithm has been shown to amplify extremist content and misinformation. Although YouTube reportedly altered its algorithm to address these concerns in recent years, to some critics it remains a “repository” of harmful content that can then be shared across multiple other social media platforms and messaging apps. 

Trust and Safety on Social Media Platforms

Most social media companies have trust and safety policies and teams dedicated to keeping online spaces safe for everyone. This can involve policies around hateful speech and misinformation, content moderation, and illegal content. In theory, the intent is to protect users from abuse and online harms, and to address growing concerns about foreign interference on social media platforms. How algorithms and content recommendation systems function is also part of trust and safety practice, since it shapes the kind of content users see and can either promote or suppress divisive content. For example, research has shown that social media algorithms have pushed misogynistic and self-harm content to young users.

The roll-back of trust and safety measures
Recent years have seen the rollback of a number of trust and safety measures across multiple platforms. It started with Elon Musk's takeover of Twitter - which he later rebranded as X - in 2022. Musk's takeover resulted in the disbanding of the 'Trust and Safety Council' that advised on online safety and content moderation, an 80% reduction in the number of trust and safety engineers, and a step back from a number of misinformation policies. The onus of fact-checking now falls mostly on users through Community Notes, where contributors can add notes providing additional context to any post. Research suggests that since Musk's takeover, hate speech has significantly increased on X, and inauthentic behaviour may have increased as well.

Similar changes have been instituted at Meta. In January 2025, Meta ended its fact-checking program across its platforms (Facebook, Instagram, and Threads), which was originally designed to prevent the spread of misinformation. It also announced changes to its hate speech policy, which critics say allows hateful content targeting transgender people, immigrants, and women. Similar to X, it has introduced Community Notes, shifting fact-checking and content moderation onto users. Research has been mixed on how effective Community Notes are at reducing misinformation, especially when it comes to moderating humour and satire.

At the same time that Meta has rolled back third-party fact-checking, it has also expanded a content monetization program that pays creators bonuses when their content goes viral. Critics argue that this may lead to an increase in false and incendiary stories on Meta platforms - precisely because they draw high engagement. 

For its part, YouTube has a hate speech policy, banning harmful content targeted towards a number of groups. However, critics point out that the platform quietly removed “gender identity and expression” from this policy after Donald Trump’s inauguration, making transgender and nonbinary users less protected from abuse. YouTube has said that its hate speech policies have not changed. 

These changes have been cause for alarm among observers, since research suggests that cutbacks to trust and safety teams may reduce a platform's capacity to respond to online harms, and that online misinformation can fuel real-world violence.

Community moderation, anti-toxicity tools, local digital spaces 
Other platforms take different approaches to trust and safety and content moderation. Reddit, for example, is a largely community-moderated platform with multiple layers of moderation. Each subreddit is run by its own team of moderators, and each has its own rules that are enforced by that team. In addition, the platform-wide Reddit Content Policy applies to all content on the platform. On the individual level, Reddit users, or redditors, can upvote, downvote, or report content to moderators. Reddit also has a number of automated tools that remove abusive content.

This does mean, however, that moderation of a subreddit largely depends on whoever "owns" it (i.e. whoever created it), unless they relinquish it to another user or group, or Reddit admins step in and force an ownership change (typically only in cases where moderators were breaking Reddit's terms of service or the law, or permitting their users to do so). Subreddit owners and moderators therefore have significant editorial control, and can choose to permit hateful or abusive content as long as it doesn't violate Reddit's larger content policy.

Bluesky's trust and safety team has developed a number of features designed to reduce toxicity. For example, users can detach their original post when someone quotes it, which can help them maintain control over the conversation and prevent dogpiling (when a large number of coordinated accounts attack a social media user). Users are also able to hide replies on their posts and change their notification filters so they only receive updates from people they know. These design features have the potential to support more prosocial interactions, but they are not a silver bullet for harmful online content: when Bluesky added over 3 million users in one week, it received 42,000 reports of moderation policy violations in a single 24-hour period. In response, the platform had to quadruple the size of its Trust and Safety team.

In addition to building out its team, Bluesky is also experimenting with additional tools to limit harassment, detect toxicity and scams, introduce geography-specific labels, and design video content for safety. One example is the CLR:SKY interface tool, designed to shift the tone of civic conversation on the platform using Perspective API technology.

There are also smaller-scale digital spaces that prioritize trust and safety for specific communities, and can tailor their approach accordingly. For example, Front Porch Forum is a family-owned Vermont public benefit corporation that aims to build local communities, with users required to give their street address during registration. Due to the smaller scale of this digital platform, moderators (who are all based in Vermont) review all content before it is published.

News Sharing on Social Media Platforms

[Chart: news sharing across social media platforms. Source: 2025 social media users (18+) data from the TMU Social Media Lab.]

Methodology

VERIFIED collects data at scale from YouTube, Reddit, and Bluesky, using AI-facilitated semantic analysis tools to support analysis and reporting on the quality of online civic conversations across these platforms. The content we monitor is analyzed against five abuse categories using a machine learning tool called Perspective API.

Perspective API provides a confidence prediction to assess whether a piece of content meets an abuse category. When a piece of text is evaluated, it is given a score from 0% to 100% for each category, based on how certain the machine learning model is that the text meets that abuse category. If the text is assessed as >=70% likely to meet an abuse category, we determine that it has met the criteria. If a piece of content meets at least one of the five abuse categories at the >=70% confidence threshold, it is counted as abusive. The abusive category serves to aggregate all content that meets at least one abuse category.

We use a >=70% confidence threshold as it's consistent with what is recommended for social science research.
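
To make the thresholding concrete, below is a minimal sketch of how a comment could be scored and flagged, assuming standard Perspective API attribute names as stand-ins for the five abuse categories (the exact categories VERIFIED uses are not listed in this section, and the API key is a placeholder).

```python
# Minimal sketch: score a comment with Perspective API and apply the
# >=70% threshold described above. Attribute names are assumptions.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

# Stand-ins for the five abuse categories (assumed, not confirmed).
ABUSE_CATEGORIES = ["TOXICITY", "SEVERE_TOXICITY", "INSULT",
                    "THREAT", "IDENTITY_ATTACK"]
THRESHOLD = 0.70

def score_comment(text: str) -> dict:
    """Return a 0-1 confidence score for each abuse category."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {cat: {} for cat in ABUSE_CATEGORIES},
    }
    response = requests.post(URL, json=body, timeout=30)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return {cat: scores[cat]["summaryScore"]["value"]
            for cat in ABUSE_CATEGORIES}

def is_abusive(text: str) -> bool:
    """Count content as abusive if any category scores >= 0.70."""
    return any(score >= THRESHOLD
               for score in score_comment(text).values())
```

In this scheme, a comment scoring, say, 0.72 on one category and 0.40 on the rest is counted once in the aggregate abusive category, regardless of how many individual categories it meets.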

Federal Election Report 1

Power Users Dominate the Discussion on r/Canada

We explored the political conversation on Reddit, specifically the largest Canadian subreddit, r/Canada, in the days leading up to the federal election being called.

Federal Election Report 2

What’s Canada’s biggest subreddit talking about?

For this report, we analyzed 56,136 Reddit comments posted to 278 different r/Canada submissions between March 24, 2025 at 00:00 ET and March 27, 2025 at 23:59 ET. This period encompasses the first four days of the 2025 federal election period.
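
For readers curious how a comment window like this can be assembled, here is one plausible approach using the PRAW library. This is illustrative only - it is not VERIFIED's actual collection pipeline, and the client credentials are placeholders.

```python
# Illustrative sketch: gather r/Canada comments within a fixed window
# using PRAW. VERIFIED's real pipeline is not documented here.
from datetime import datetime
from zoneinfo import ZoneInfo
import praw

reddit = praw.Reddit(client_id="YOUR_CLIENT_ID",          # placeholder
                     client_secret="YOUR_CLIENT_SECRET",  # placeholder
                     user_agent="collection-sketch/0.1")

ET = ZoneInfo("America/Toronto")
start = datetime(2025, 3, 24, 0, 0, tzinfo=ET).timestamp()
end = datetime(2025, 3, 27, 23, 59, tzinfo=ET).timestamp()

comments = []
for submission in reddit.subreddit("canada").new(limit=None):
    # .new() yields newest-first; stop once submissions predate the
    # window (simplification: misses in-window comments on older posts).
    if submission.created_utc < start:
        break
    submission.comments.replace_more(limit=0)  # expand full comment tree
    comments.extend(c for c in submission.comments.list()
                    if start <= c.created_utc <= end)

print(f"Collected {len(comments)} comments in the window")
```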


What’s true, what’s noise, and what matters?

Most people encounter political content on social media, but when fake accounts and bots flood that space, it's hard to know what's real and what actually matters.

In a time when democracy is increasingly at risk, Canadians need a non-partisan way to understand how politics is playing out online.

Verified cuts through the noise, breaking down online election content and highlighting where misinformation or foreign interference may show up - so people can feel informed and confident heading into this year's federal election.

Each week, Verified will deliver a snapshot of the online election conversation, helping Canadians spot information threats, stay informed, and feel more confident navigating the digital side of politics.

Sign up for updates from Verified.

By clicking Sign Up you're confirming that you agree with our Terms of Service.

Verified is supported by donations from active citizens and the following funders:

We recognize support from the Inspirit Foundation. We also acknowledge the support of the Canadian Race Relations Foundation with funding provided by the Government of Canada.