Is Political Science Blowing Its Close-Up?
Political science – and the evidence-driven analysis it embraces – continues to reach new highs in popularity. Campaign coverage has changed considerably in the last few years, with traditional horse-race reporting now sharing the headlines with analysis that uses poll averages and closer attention to demographics to build election forecasts. With this change, political scientists (for whom such analysis is stock in trade) have also seen their visibility and popularity skyrocket.
But two recent stories in the election realm offer a cautionary tale about what happens if we’re not careful (as practitioners or consumers) about the use of political science to learn more about elections.
The first comes from Montana, where researchers interested in studying how partisan cues might affect voting in nonpartisan judicial races have thrown the state into a frenzy. The problem is their use of a mailer that (perhaps illegally and certainly ill-advisedly) uses the Great Seal of Montana. [It looks like similar mailers have gone to California and New Hampshire too.] That piece, which critics claim is misleading to voters, has many people inside and outside of Montana worried that the mailer (and the resulting controversy) could end up affecting the outcome. Worse, one defender of the project has sought to justify the mailer’s impact on Montana by suggesting that the concept of nonpartisan judicial elections isn’t such a good idea in the first place.
The second story involves a recent guest post about non-citizen voting on the Washington Post’s Monkey Cage blog. The Monkey Cage – a well-respected political science blog – recently joined the Post, and has become a terrific source of political science-driven analysis and commentary on a wide range of issues. In the guest post, however, two researchers who have been studying non-citizen voting claimed that their analysis suggests non-citizen voting is higher than previously thought and could be skewing outcomes. Not surprisingly, both sides in the ongoing “voting wars” (trademark Rick Hasen) have seized on the piece (as of midday Tuesday it had more than 3,000 comments) and it will be a centerpiece in voter ID and proof-of-citizenship fights for years to come. But several academics and analysts who are familiar with the data used in the study are saying that it doesn’t necessarily support the conclusions reached AND that even if it did, the appropriate response would be further study, not publication of the results.
This is the part where I remind you that I have been beating the drum for years about the need for more field experiments (like the one in Montana) that – by their very definition – seek to measure the effect of a studied practice on voter behavior, and for data-driven analysis (like the Monkey Cage post). Properly executed, they are a powerful force for change in election administration and a means to rise above rhetoric and partisanship in shaping election policy.
But I would suggest that neither of these projects was properly executed.
In my humble opinion, both projects crossed the line by going beyond studying the election process to (deliberately or not) directly affecting election outcomes, specifically:
1) the Montana project by sending an apparently official mailer, without any apparent notice to or consultation with the state, that makes a value judgment (or at least a methodological one) about the role of partisanship in voter decisions. That leads to the spectre of the experiment affecting not just behavior (what do individuals do with this information?) but outcomes (changing the tenor of the race as a result). Moreover, as a colleague has observed privately, it undermines the public’s confidence in a governmental entity whose activity should be nonpartisan; and
2) the Monkey Cage post by suggesting (apparently prematurely, based on the data) that non-citizen voting could affect outcomes in next week’s election. Another colleague also suggests the timing makes it too current for its own good, like the latest study about over-eating that’s released the week before Thanksgiving.
Given the sky-high and bitter partisan divisions over election policy in this country, both projects are the equivalent of shouting “FIRE!” in a crowded theater.
That’s not good for political science or the field of elections.
Political science has earned its close-up; the focus on data and dispassionate analysis is appealing to readers and commentators alike because it promises a departure from the typical “spin wars” of election reporting. But if political scientists aren’t careful – either in monitoring their own or their colleagues’ research and publishing decisions – the interest in political science-driven stories will wane. Or worse, it could become yet another (albeit more numerate) weapon in the ongoing rhetorical wars between the parties.
It will also make it harder for researchers and election officials to “play nice” with one another on projects of mutual interest – which for me would be the unkindest cut of all.
I hope that doesn’t happen – and that’s why I’ll be interested to see how these two controversies shake out in the days and weeks to come. Stay tuned.