Samara's Democracy 360, a report card on the state of Canada's democracy, focuses on the complex relationship between citizens and political leadership. Because democracy is about more than just casting a ballot every four years, any conversation about how decisions on the future of our country are made needs to consider a more robust definition of "everyday democracy."
Samara's Democracy 360 expands the measurement of democracy and kick-starts a conversation using measurable indicators focused on three areas essential to a healthy democracy: communication, participation and political leadership. That is: talking, acting and leading.
Democracy 360 brings together a number of data sources, such as Samara's public opinion research and website content analyses, as well as publicly available data from other sources, including the House of Commons and Elections Canada. As such, it is designed to be a thorough, yet manageable, look at the health of citizens' relationship with politics, and one that will be repeated in 2017, in time for Canada's 150th birthday.
In an effort to set a benchmark that prompts reflection and discussion, Samara has awarded an overall letter grade as well as a letter grade for each of the three areas, as outlined in this report.
The Result
What does C mean? Quite simply, our democracy is not doing as well as a country as rich as Canada deserves. Canadians are not participating in politics as much as they could, they don't believe it affects them, and they don't see their leaders as influential or effective. To turn this situation around, Canada requires more than just higher voter turnout. Canada requires a culture shift towards "everyday democracy," in which citizens feel politics is a way to make change in the country and their voices are heard.
Canadians don’t trust Members of Parliament or political parties and believe they largely fail to perform their core jobs:
Politics is seen as irrelevant and, as a result, Canadians are withdrawing from the democratic system:
To make politics relevant, Canadians will need to see the value in politics and democracy. This will require the following changes:
Despite an overall unhealthy picture, the Democracy 360 also reveals several positive signs on which to build:
An election in 2015 presents a real opportunity to build momentum towards a more engaging political culture:
Samara’s Democracy 360 uses quantifiable indicators to focus on three areas that are essential to a healthy democracy: communication, participation and political leadership.
The indicators measured in this report track Canadian democracy across a wide range of areas, from diversity in the House of Commons to the many ways Canadians can participate in politics to how Members of Parliament and parties function. While not exhaustive, together the indicators paint a rich picture of the way that Canadians talk, act and lead in politics, adding multiple dimensions to voter turnout, the metric most commonly used to measure democracy.
The Methodology
What is Samara’s Democracy 360?
The Democracy 360 is Samara Canada’s made-in-Canada report card focused on the relationship between citizens and political leadership. The Democracy 360 combines 23 quantifiable indicators, focused on three areas: communication, participation and political leadership. The Democracy 360 will allow Canadians to compare and assess their democracy over time. Samara plans to revisit the Democracy 360 every two years to measure improvement or decline—the second edition of the report is due out in 2017.
Where Did the Idea for the Democracy 360 Come From?
During Samara’s exit interviews in 2009, a former MP identified a niche: Canadians need a more systematic understanding of how well our democracy is working—one that doesn’t rely on anecdote. The Democracy 360’s conceptual design, data collection and analysis were subsequently conducted over the next few years.
Samara has three main objectives with the Democracy 360:
Why the Name “the Democracy 360”?
The "360" in the title evokes a circle, and with it several themes: circling back to take stock, providing a 360-degree scan, and drawing attention to the "vicious circle" and the "virtuous circle," a useful metaphor for the interdependence between citizens and their elected leaders in building a responsive democracy.
How Did the Research Unfold?
The design, data collection, analysis and scoring process has involved advice from academics, practitioners in the civic and political engagement space, and Samara’s community. A few of the methodological components and findings have also been presented at academic conferences by Samara researchers, and many elements of the Democracy 360 were tested through the research and release of Samara’s Democracy Reports from 2012 to 2014.
Though the relationship between citizens, MPs and political parties has always been at the core of the Democracy 360, how to organize this analysis has generated a great deal of discussion. Initially, the Canadian Democratic Audit series inspired a focus on measuring democratic values like representativeness, inclusiveness and participation. Though these values underpin many of the Democracy 360's indicators, we decided to organize the report by three areas of activity: communication, participation and leadership.
What Does the Democracy 360 Measure and Not Measure?
Quality: The benefit of the Democracy 360's evaluation is in its breadth, not the depth of its indicators. For many of the indicators, for example "householders" (the printed updates MPs mail to their constituents), measuring quality presents a challenge. While this level of analysis is important, it is beyond the scope of this broader project. Thus, the Democracy 360 focuses on whether the activity did, or did not, happen. This measurement choice was made in order to establish a benchmark. We encourage future research to probe deeper within each measure (such as quality or frequency).
All democratic players: By focusing on elected political leadership, the Democracy 360 misses the full complexity of democracy, including the work of senators, public servants, political staff, journalists and the judiciary, a trade-off made to keep the scope of the Democracy 360 project feasible. Samara also believes that the relationship between citizens and their representatives is at the heart of democracy, which is why other projects, like the Samara MP Exit Interviews, have also probed the nature of this representative relationship in Canada.
Municipal and Provincial Political Leadership: The Democracy 360 does not systematically evaluate provincial and municipal political leadership in the way it evaluates federal leadership. However, some measures of political participation and of Canadians' views on politics are not specific to the federal arena. Admittedly, this is conceptually less tidy, but it also reflects the realities of how citizens think about politics as observed in Samara's focus group research: for many Canadians, politics as an activity is not neatly demarcated into three levels of government.
How Were the Indicators In the Democracy 360 Selected?
With a long list of potential indicators, five criteria were used to select the indicators that measure communication, participation and leadership in Canada:
Where Did the Democracy 360 Data Come From?
The Democracy 360 brings together a number of data sources, including Samara's public opinion research and website content analyses, as well as data external to Samara, such as from the House of Commons and Elections Canada. The data in the 2014 Democracy 360 dates from 2011 to 2014. There are four general sources of data in the report card:
Elections Canada's Estimation of Voter Turnout by Age Group and Gender at the 2011 Federal General Election reports turnout by age, gender and province. Elections Canada relies on its administrative data to generate these figures. To improve the accuracy of turnout figures, Elections Canada uses the number of Canadian citizens of voting age as the denominator in its calculations, rather than the number of registered voters, a choice that recognizes the voter registration list may be incomplete.
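To illustrate why the choice of denominator matters, the short Python sketch below computes turnout both ways for a single hypothetical riding. The figures are invented for illustration and are not Elections Canada data.

    # Hypothetical figures for one riding (illustrative only).
    votes_cast = 52_000
    registered_voters = 78_000
    eligible_citizens = 85_000   # citizens of voting age, registered or not

    turnout_registered = votes_cast / registered_voters   # ~66.7%
    turnout_eligible = votes_cast / eligible_citizens     # ~61.2%

    print(f"Turnout (registered voters): {turnout_registered:.1%}")
    print(f"Turnout (eligible citizens): {turnout_eligible:.1%}")

Because the registration list may miss some eligible citizens, dividing by registered voters alone tends to overstate turnout; the larger denominator gives the more conservative estimate.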
Why Create a Composite Score?
Bringing together indicators into an overall composite value, index or "score" is an inherently subjective activity (read Malcolm Gladwell on this point). Inevitably, there are a number of choices to be made: whether certain indicators should be given greater weight because they matter more, or whether all indicators can be treated the same. Seldom is there an obvious or "correct" way to combine such values that leads to complete agreement.
Despite the challenges associated with composite scores, they can still be worthwhile constructs because of the other advantages they offer, such as ease of comprehension and comparison.
Comprehension: With over twenty indicators, each indicator on its own may mean little; in other words, it is difficult to see the overall picture with so many data points in play. Bringing the values together can assist with this challenge. That said, the combination can also obscure differences among indicator values (especially large ones), which is why Samara published "The Democracy 360: The Numbers," showing the values for each of the 23 indicators.
Comparison: A composite score is a powerful tool for comparison. Some indicators may improve or decline in any given year, but these changes provide little insight into the overall picture. A composite score can reveal whether, on the whole, there is improvement or decline over time. As well, when the three areas (communication, participation and political leadership) each have a composite score, it is easier to see which areas are faring better than others.
How Was a Composite Score Generated?
Each of the indicators in the Democracy 360 is scored out of 100; the lower the score, the worse the result. While the scale allows for extremes (a perfect score of 100 or a score of zero), it is unlikely that some indicators will ever reach those limits. Nevertheless, the use of a 100-point scale across all indicators provides consistency and reduces subjectivity, compared to creating a separately adjusted point scale sensitive to the particular variation in each indicator.
The structure of the Democracy 360 is nested and hierarchical. The three areas (communication, participation and leadership), as well as the indicators within each area, are weighted equally, so the weight applied to each indicator depends on the number of indicators within its area. For indicators with sub-indicators, the indicator score is the average of its sub-indicator scores.
The Democracy 360's weighting scheme is based in conceptual rather than statistical theory; in other words, it reflects the three key areas of democracy that Samara has identified. Researchers at Samara did, however, conduct a factor analysis of the indicators that rely on survey data alone. Because this process required omitting the three other data sources, its results were treated as revealing rather than directive.
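For readers curious what such a check can look like, here is a minimal Python sketch of an exploratory factor analysis using scikit-learn. The data, variable names and three-factor choice are assumptions for illustration only; Samara has not published the code or data behind its analysis.

    # Illustrative only: exploratory factor analysis of survey-based
    # indicators. The matrix below is random placeholder data standing
    # in for a respondents-by-indicators table of survey results.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    survey_responses = rng.random((500, 12))  # 500 respondents, 12 indicators

    fa = FactorAnalysis(n_components=3)       # probe for three latent areas
    fa.fit(survey_responses)

    # Loadings show how strongly each indicator tracks each factor; a clean
    # three-area structure would appear as block patterns in this matrix.
    print(np.round(fa.components_, 2))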
Given the Democracy 360’s conceptual focus, Samara determined that each category would be treated as equally important. This meant the composite score applies an equal weighting scheme to each of the three areas and an equal weighting scheme within each area to each of its indicators.
Scores for each of the three areas (communication, participation and political leadership) are calculated by averaging the scores of the area's indicators. The three areas (each weighted equally) are then added to create a total score out of 100.
Communication + Participation + Leadership = Samara 360
Score out of 33⅓ + Score out of 33⅓ + Score out of 33⅓ = Samara 360 (out of 100)
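To make the nested structure concrete, the Python sketch below computes a composite following the scheme described above: sub-indicators average into their parent indicator, indicators within an area are weighted equally, and each area contributes a third of the total. All indicator values and groupings are invented placeholders, not Democracy 360 data.

    # All values below are invented placeholders. An indicator is either
    # a single score out of 100 or a list of sub-indicator scores.
    areas = {
        "communication": [55, [40, 60], 70],  # second indicator has two sub-indicators
        "participation": [45, 50, 35, 65],
        "leadership":    [60, 30, 55],
    }

    def indicator_score(indicator):
        # Sub-indicators are averaged into their parent indicator.
        if isinstance(indicator, list):
            return sum(indicator) / len(indicator)
        return indicator

    def area_score(indicators):
        # Indicators within an area are weighted equally.
        scores = [indicator_score(i) for i in indicators]
        return sum(scores) / len(scores)

    # Each area contributes equally (one third of 100 points) to the total.
    total = sum(area_score(inds) / 3 for inds in areas.values())
    print(f"Composite score: {total:.1f} / 100")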
[Table: composite scores for Communication, Participation, Leadership and the Total.]
Why Give a Letter Grade In the Democracy 360?
Letter grades were not part of the original conception of the Democracy 360, but advice from communications experts suggested a report card component would be a valuable tool to help Canadians understand the data. This reflects a trade-off: letter grades add another layer of subjectivity to the data, but they also make the research accessible and useful to Canadians.
How Was a Letter Grade Assigned?
Deciding on a letter grade scale was more complicated than simply applying the grading system familiar from most school settings (e.g., A above 80%, F below 50%). Communications advice suggested that an overly negative report card (with several Fs), year over year, would be discouraging and would risk greater alienation from the political arena and from political participation if Canadians did not see hope for improvement. The advice also suggested the report card would be less effective as a communications tool if the grades were unlikely to change from year to year. Given the mix of indicators in the Democracy 360, it is reasonable to anticipate small shifts of improvement or decline, but not large shifts (that is, 10 percentage points or more), in two years' time.
This meant Samara needed a letter grade scale in which (1) the value of an F grade would be lower than the commonly used 50% threshold, and (2) each letter grade would cover a fairly narrow numerical range, so that the assigned grades would be more sensitive to change.
To design this grading system, Samara's researchers first sought outside input from a small group of experts, practitioners and engaged citizens in the democratic space to help situate where the Democracy 360 scores are now in relation to where they could be if Canada's democracy were stronger in 10 years' time. Specifically, each provided an opinion of what a "great" score for each indicator would look like in roughly 10 years. The group did not have access to the current Democracy 360 values, though they were provided some historical and comparative data as reference points.
What Is a “Good” Grade?
Indicator values provided by the outside consultants were combined using the same weighting scheme as the composite scores. Overall, their input suggested a goalpost for the Democracy 360 of 60%, which is 12 percentage points higher than the 2014 score of 48%.
The goalposts for the three areas (communication, participation and leadership) helped to establish the upper limit for the letter grade scale. For example, the 70% goalpost for communication would correspond with the highest grade on this scale (A+).
Inherently, determining the rest of the scale values was arbitrary, but the process was driven by a desire to have each letter grade cover about the same numerical range (that is, two to three points).
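Samara has not published its exact cut-offs, but a scale with the two properties above might look like the Python sketch below, where an area's goalpost anchors the A+ and each grade beneath it spans roughly three points. All cut-offs here are hypothetical.

    # Hypothetical grade scale: the goalpost score earns an A+, each step
    # down covers about three points, and F therefore falls well below 50%.
    GRADES = ["A+", "A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D+", "D", "D-"]

    def letter_grade(score, goalpost=70, band=3):
        # Walk down the scale in `band`-point steps from the goalpost.
        for step, grade in enumerate(GRADES):
            if score >= goalpost - step * band:
                return grade
        return "F"  # anything below the lowest band (37 on these settings)

    print(letter_grade(70))  # "A+" at the 70% communication goalpost
    print(letter_grade(48))  # "C-" on this illustrative scale

On a scheme like this, a shift of only a few points in a composite score can move the letter grade, which is the sensitivity the report card was designed for.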