Showing posts with label political polling.

Tuesday, June 08, 2010

Avoid Getting Caught Under the Dome

"I would say that we may have underestimated the anti-incumbent mood."

-- U.S. Senator Blanche Lincoln (D-Arkansas), cnn.com, June 8, 2010

After reading this, we were wondering where the Senator has been living for the last year – in a cave?

A strong anti-incumbent, anti-establishment, and anti-Washington sentiment has been brewing for over a year, manifesting in real political movements and some partisan-driven groups.

The Senator's comments reflect an affliction common among officeholders: they develop an "under the dome" mentality, in which their sphere of influence and their perceptions are based on what a small circle of Capitol insiders say and feel. Incumbents lose touch with the real needs and concerns of the people they represent. This mentality drives incumbents to feel that everything they do in the Capitol is important to the voters back home, while they ignore the local (small) things they must do back in their communities.

Elections are won by candidates who remain in touch with the core needs and values of the people they represent and who recognize that serving in a governing body is an honor, not a right bestowed upon them. They must recognize that their job is to advocate for the people they represent, not for a small group of capitol building advocates. Elected officials must advocate, in the legislature and beyond, the issues and values that connect with the voters in their communities, states, and districts. Incumbents should be actively engaged in their communities whenever they are not required to be in session or at the Capitol; this prevents the kind of disconnect exemplified by Sen. Lincoln's comment. Unfortunately, too many of our elected officials retreat to the safety cocoon of the Capitol and lose touch with the realities of what happens in their communities.

This is why we advise and push our incumbent clients to remain fully engaged in their communities throughout their terms and not to focus exclusively on their work in the Capitol. This engagement includes aggressive earned and paid communications before the traditional campaign season; significant personal activity among the voters, such as walking the district, Town Hall meetings, and small-group coffee meetings; and simply doing the job well by holding regular, publicized office hours, returning phone calls, and taking care of constituents' problems no matter how big or small. It also includes periodic polling to maintain a real pulse on the constituents they represent, identify opportunities and challenges, and devise political and policy strategies to address them.

Incumbents who remain engaged in their communities will know the needs and concerns that must be addressed and will catch changes in attitudes early. They will not have to climb out from their safety cocoon, awakened by an anti-incumbent election.

Friday, January 22, 2010

The Political Environment in 2010

Coakley's loss in what was presumed to be one of the safest US Senate seats for Democrats came as a major upset, but not a surprise based on data in the closing weeks of the campaign. It is apparent that Coakley was not a good candidate and that Brown was very good. Coakley's campaign made tactical and strategic mistakes, and those mistakes have been and continue to be discussed in depth elsewhere.

This loss sparked a couple of questions, one of which is: was this the result of campaign failures, or an omen? We're already seeing the media sound the alarm about the upcoming "Republican Wave" and the talking heads recounting 1994's losses, but the polling data suggest another, less sensational story.

The polling data tell us that two serious problems contributed to the Coakley loss: the election was treated as a formality, and the political environment was ignored. We'll leave the operational criticism aside, hoping for agreement that there is no substitute for a hard-working candidate and a finely tuned campaign strategy implemented with efficiency and discipline. The political environment's impact on this campaign must be examined in depth by all Democrats in order to understand what lies ahead in 2010. As outside observers without direct access to any internal data, we can only make general observations and conclusions based on our experience and recent work in similar situations on the East Coast.

In-depth polling examinations of the political environment, with an emphasis on voter intensity and engagement, are as valuable and vital to a campaign as more "headline grabbing" information like where a candidate stands in a trial-heat scenario or the persuasive strength of his or her messages. A thorough evaluation of the political environment within which a campaign must operate should identify opportunities and expose underlying problems that have not yet manifested. Third-party polling data suggest that there was a serious motivational problem among base Democratic voters in Massachusetts. These motivational problems were obviously not overcome.

Were the motivational problems due to displeasure (disgust?) with the progress, or lack of progress, coming out of DC? Probably, but weren't these issues exposed in the polling? Regular polling was not conducted during this well-funded senatorial campaign, which is a perplexing revelation to us. The campaign lacked the ability (or felt it was unnecessary) to check its progress and determine whether the motivational issues were being reversed (assuming they were exposed in an initial benchmark survey). The campaign also lacked data that would have told it whether its message was resonating or whether events outside its control (like the healthcare bill) were having an impact on the voters.

The root problem appears to be that corrective action was not taken to motivate the base. The campaign was not designed to win in a very unfavorable climate, despite the natural partisan advantages in the state.

While some will point to the losses in New Jersey and Virginia as the foundation of a "wave" that helped to wipe out the Massachusetts Senate race, several local-level campaigns in the months between the Virginia and Massachusetts elections (with far less funding and ability to communicate with voters) were able to weather the storm.

Our polling caught early signs of motivational problems in several New York elections. Our NY clients were able to address these problems during the front end of their campaigns and were able to drive turnout. Losses are inevitable and, even if a national political wave is turning against Democrats, campaigns that track environmental changes and work diligently to prepare for changing dynamics will be far more likely to weather the worst of political storms.

Tuesday, June 23, 2009

Philosophical vs Substantive Polling Questions

There’s often a disconnect between what public opinion survey results suggest and the public’s true position on an issue. To understand why survey results sometimes imply one outcome while reality reveals another, consumers of public opinion data need to learn to distinguish between philosophical-position questions and substantive survey questions.

Philosophical survey questions, like philosophical questions in general, are not designed to arrive at a definitive answer about a specific issue. While most people could identify fundamental philosophical questions (What is the meaning of life? What is good/evil? What is beauty?), survey questions designed to evaluate philosophical positions are less easily distinguished.

Philosophical survey questions often ask respondents to evaluate the priority they place on various concepts or ideas, or to indicate whether they support or oppose a plan or idea in general terms. One example of a philosophical-position question asks respondents whether adding air conditioning to schools that currently lack it is a high, medium, or low priority, or not a priority. Another example asks respondents whether they agree or disagree with the statement that the government should guarantee health insurance for all Americans. These questions do not discuss a specific plan; rather, they probe how respondents generally feel about these topics.

Understanding the philosophical position of the voters is critical for developing a campaign’s strategy. If it is known, following our example, that voters are generally or highly supportive of adding air conditioning to schools where it currently does not exist, then the campaign for adding air conditioning has a starting point from which to understand its position. Supporters of adding air conditioning know that they are operating in an ‘environment’ that is ‘friendly’ to their cause. Supporters can focus their campaign on issues that will retain support, instead of trying to shift opinions in favor of adding air conditioning.

Philosophical opinions are often the poll results to which campaigns try to direct the public’s attention. They are the results around which campaigns generally want their issue debated; they focus on the big idea. Substantive questions, however, often reveal a schism in respondent opinion and are less likely to be reported or to be the focus of a media release. A substantive question, following our example, would ask respondents whether they would support or oppose adding air conditioning to schools in their district that currently lack it if they knew the proposal would require a bond costing thirteen million dollars. A substantive question injects the facts missing from a philosophical question and allows voters to make a more informed decision about their position on the issue, concept, or idea.

There are varying degrees of substantive questions. Respondents may be asked their opinions about an issue while being presented only the basic facts (adding X number of air conditioners at Y cost). This type of question is often called an ‘initial ask’ or an ‘uninformed ask.’ It is generally asked in neutral language and, if the issue is likely to appear on a ballot, worded as closely as possible to the language that will appear on the ballot. This sort of question gives a clear indication of whether the philosophical opinions of the respondents will differ from reality.

Substantive questions may also be included to try to simulate the dynamics of an engaged debate about an issue (along with additional message testing techniques). Questions like these are typically called ‘informed asks.’ In an informed ask, balanced messages in support and opposition of an issue are cited after a statement of facts about the issue (Adding X number of air conditioners at Y cost, supporters say… opponents say…). Respondents are then asked, based on the information presented in the question, if they support or oppose the issue.

Strategy is developed by examining the differences between philosophical opinions and what respondents indicate through substantive questions. Campaigns need to know where the voters stand philosophically on their issue, idea, or concept, and where support (or opposition) is lost (or gained) once additional information and messages about the topic are presented. Polling offers campaigns the ability to target demographic and attitudinal groups, discover where and among which respondents changes in opinion occur, and determine what needs to be done to keep or convert supporters to their cause.
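To make the comparison concrete, here is a minimal sketch of how an analyst might tabulate the swing in support between the philosophical question, the uninformed ask, and the informed ask from our air-conditioning example. The stage labels and the support figures below are invented for illustration, not results from any actual survey.

```python
# Hypothetical shares of respondents supporting the proposal at each
# stage of questioning (invented numbers for the A/C example).
support = {
    "philosophical": 0.71,   # "Is adding A/C a priority?" (general terms)
    "uninformed_ask": 0.58,  # specific plan, basic facts only
    "informed_ask": 0.49,    # facts plus balanced pro/con messages
}

def swing(stage_a, stage_b):
    """Change in support, in percentage points, between two question types."""
    return round((support[stage_b] - support[stage_a]) * 100, 1)

# Support lost once the facts (the bond, the cost) are introduced:
print(swing("philosophical", "uninformed_ask"))
# Further support lost under a simulated engaged debate:
print(swing("uninformed_ask", "informed_ask"))
```

The gap between the first and last stages is the distance between the "friendly environment" a campaign starts with and the fight it will actually face.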

To genuinely understand public opinion on an issue, more than the philosophical data needs to be presented. We have observed the philosophical fallacy in the current deluge of polling results presented in the media about healthcare reform. While the polls show wide appeal for change and for reforming the system, substantive questions about specific plans show far less consensus about how to fix it. Consumers of healthcare survey data should be careful to seek out the results of substantive questions and not be clouded by reports based on philosophical support for a plan.

Thursday, April 02, 2009

Congratulations Mark Lesko


Fako & Associates, Inc. would like to congratulate Supervisor-elect Mark Lesko on his overwhelming victory in the election for Brookhaven, New York Town Supervisor on March 31, 2009.

F&A, Inc. worked with an incredibly talented group of consultants, campaign staff, and volunteers to develop the winning strategy that guided Lesko in securing 55 percent of the vote over Republican Timothy Mazzei. Lesko put together an outstanding field campaign team and an unrivaled team of consultants whose hard work helped to elect a strong fiscal reformer who will fight corrupt practices and save taxpayers' hard-earned money.

Our strategy in Brookhaven was a continuation of our work in New York State over the past several cycles, helping Democrats regain office in tough suburban and rural parts of the state. While we have experienced several victories in urban areas, it is our work Upstate, in the North Country, and in the Adirondacks that helped Democrats regain majority control of the New York Senate in Albany for the first time in forty years.

Friday, September 05, 2008

Exploring Ideology and Partisanship in Political Polling

The public, campaign professionals, news media organizations, and interest groups are being inundated with polling results now that we are in the final stages of the 2008 election cycle. We are already witnessing reports about who is winning, who is losing, and why one candidate is stronger or weaker than another as campaigns position for earned media and fundraising dollars.

Party Identification (ID) is one of those demographic sub-data points receiving a lot of attention this cycle. Party identification may be reported based on a poll respondent’s declared or partisan registration status in states where this data is available and applicable or, more commonly, based on a “self-identified partisanship” demographic question in polls. This data is very useful for both public review and internal strategic analysis.

We see many reports that Democrats have a certain percentage advantage compared to four years ago, etc. This polling data is catching legitimate changes in voter attitudes. However, self-identified partisanship is a moving target. An individual who identifies as a Democrat today may have said Independent a year ago and Republican four years ago. This is a natural adjustment as people’s attitudes and perceptions of the political parties change over time.

The current partisan shift and trends are fueled by negative perceptions of the GOP brand, which causes fewer people to admit they are affiliated with / support the GOP, even if their underlying beliefs are more in line with Republicans. Obama’s appeal to elements of the electorate who previously didn’t participate (younger voters) and those who are shifting their affiliations based on their attraction to Obama, also fuel the changing partisan identification.

Self-identified partisanship shouldn’t be relied upon in isolation as an indicator of voting behavior. Due to its fluidity, we strongly recommend that all polling analyses and publicly released polls also include an evaluation based on self-described ideology. This sub-group is frequently arrived at by a question such as the following (or a similar iteration):

  • How would you describe your own political beliefs -- very liberal, somewhat liberal, moderate, somewhat conservative or very conservative?

Ideology tends to be a more stable underlying factor driving an individual’s voting behavior. It is an excellent variable to evaluate in conjunction with, and in the context of, changing partisan affiliations. It can verify strong partisan shifts in favor of or against a candidate, and it allows for a more realistic, accurate assessment of the political environment when the underlying ideological make-up of the surveyed electorate is out of alignment with a shifting partisan composition.

For example: If a Congressional District shows a 10 point increase in Democratic identification from two years ago, but the District remains a moderate to conservative leaning area, then the surface movement among partisans may be tempered by the voters’ underlying ideological leanings. This situation would show favorable trends for a Democrat, but not a fundamental change in the voters’ likely behaviors, although the partisan shift will have an impact on the immediate election.

Alternatively, an area that shows a 10-point Democratic gain from the last cycle along with a moderating, more liberal electorate than in previous elections is registering a fundamental change in the electoral make-up, an indication of long-term, more stable movement in the Democrats’ favor.

These two scenarios, in which Democrats receive a similar gain in self-identified partisanship, will require different tones of message because of the underlying ideological composition of the District. A campaign that fails to recognize the ideological difference between a liberal Democratic District and a moderate-to-conservative Democratic or center-right, Independent-leaning area where self-identified Democratic affiliation is increasing can be disastrously off message.
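The kind of breakout we recommend can be sketched in a few lines: cross-tabulate each respondent's self-identified party against self-described ideology, then look at the ideological distribution inside each partisan group. The respondent records below are invented for illustration.

```python
# Hypothetical (party ID, ideology) pairs from a survey file.
from collections import Counter

respondents = [
    ("Democrat", "moderate"), ("Democrat", "somewhat conservative"),
    ("Democrat", "somewhat liberal"), ("Independent", "moderate"),
    ("Republican", "very conservative"), ("Democrat", "moderate"),
]

def ideology_within_party(party):
    """Distribution of self-described ideology among respondents
    identifying with the given party."""
    subset = [ideo for pid, ideo in respondents if pid == party]
    counts = Counter(subset)
    return {ideo: n / len(subset) for ideo, n in counts.items()}

# A District gaining Democratic identifiers who skew moderate-to-conservative
# calls for a very different message than one gaining liberals.
print(ideology_within_party("Democrat"))
```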

In conclusion, opinions of partisan sub-groups should not be reported or analyzed in isolation from the voters’ ideological position. Publicly released polls should include sub-group analysis with both partisan and ideological breakouts, and all reporting on polls should include similar evaluations. We also recommend that campaigns utilize ideological data in their internal strategic analysis, which will lead to better, more accurate strategic recommendations and decisions.

Wednesday, February 27, 2008

Informed Polling and Getting it Right

Every pollster will tell you that political polls are a snapshot in time and that, at best, consecutive polls can reveal trends; they should not be used to predict turnout or the outcome of an election.

While the national attention is on how the pollsters got it wrong in several contests with surveys taken only days before an election, few are talking about how early surveying oftentimes gets things right. We've occasionally written about the importance of informed trial heats in past posts.

To refresh, an informed trial heat is designed to simulate the effects of an engaged campaign, presenting balanced positive and negative messages about each candidate (or multiple candidates). The end result of the election scenario tells a campaign what is possible with their messages and themes they plan to implement. Most importantly, this section will determine if the core message works in direct contrast to the opponent(s)' message and can identify movement among the various demographic and attitudinal groups -- helping refine strategy.

F&A recently conducted a benchmark survey for a client running in a multi-candidate Democratic primary for an open seat. The benchmark poll was designed to evaluate the political environment, determine voters’ top issue concerns, examine opinions of the candidates and other significant figures, and test messages in support of and in opposition to various candidates. As we always do in comprehensive benchmark polls, we included an informed trial heat question in the survey. The client was on a tight budget and didn't want to include "minor" candidates in the informed trial heat. After some debate, we were able to convince the client to include the "minor" candidates in the question.

We accounted for the ability of the "minor" candidates to get their message out given their budget constraints. Their messages were curtailed in the informed scenario to a simple bio-statement, while "major" candidates received bio, supporting, and opposing information, simulating an engaged campaign.

In our poll, the results of the informed trial heat were unexpected: a "minor" candidate took a 22 percentage point lead over the assumed frontrunner in the informed scenario, an increase of 25% above the candidate's level of support in the initial trial heat (the uninformed horse race question). A "major" candidate jumped up 7%, and the assumed frontrunner stalled with a gain within the margin of error. Undecided voters in the initial trial heat sided heavily with one of the "minor" candidates, and the percentage of undecided voters was reduced by over 40% in the informed trial heat. At this point we recognized the minor candidate's growth potential and advised the client to pay close attention to this so-called "minor" candidate. We noted that this individual clearly had the basic background and simple message to break through the clutter of a highly engaged multi-candidate race, despite an initial perception of not being viable.
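As a hedged illustration of how such movement is read, the sketch below tabulates each candidate's change between the two trial heats and flags movement beyond the survey's margin of error. The candidate labels, percentages, and margin of error are invented and do not reproduce the actual survey's figures (remaining share in each scenario goes to other candidates).

```python
# Invented margin of error for this hypothetical sample, in points.
MARGIN_OF_ERROR = 4.4

# Invented topline shares in the uninformed and informed trial heats.
initial = {"frontrunner": 28, "major": 16, "minor": 9, "undecided": 38}
informed = {"frontrunner": 29, "major": 23, "minor": 35, "undecided": 10}

def movement(candidate):
    """Point change from initial to informed ask, flagged True if the
    change exceeds the margin of error."""
    delta = informed[candidate] - initial[candidate]
    return delta, abs(delta) > MARGIN_OF_ERROR

for name in initial:
    delta, significant = movement(name)
    print(f"{name}: {delta:+d} pts{' (beyond MoE)' if significant else ''}")
```

A frontrunner whose gain stays inside the margin of error while a "minor" candidate leaps past it is exactly the pattern that should prompt a strategy review.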

As the campaign progressed, the so-called minor candidate ended up raising serious money and gained significant earned media attention in addition to their own paid activities. It became apparent that this minor candidate was not minor at all, something our polling had observed only three months before the election.

The "minor" candidate ended up winning this election, finishing slightly ahead of the "major" candidate that we had also observed gaining traction through the informed trial heat. This highlighted the usefulness and importance of including informed trial heat questions in polls, and why clients should never ignore perceived "minor" opponents. Polls that include an informed trial heat are one of the most useful strategic planning tools available to a campaign. They give campaigns the information needed to determine whether their message works (in the above example, our client’s message was not working); provide detailed strategic planning information, particularly at the demographic sub-group level; and give campaigns the information to prepare and adjust strategy for unanticipated situations (such as an unexpectedly strong opponent).

Congratulations Aubertine Campaign


Fako & Associates, Inc. congratulates Darrel Aubertine and the Aubertine campaign team for running a successful campaign that resulted in a 52.4% to 47.6% victory in New York's 48th Senate District. While some refer to the results as shocking, our strategic polling showed that victory was achievable, even with a staggering registration deficit of thirty thousand more Republican registrants. Congratulations go out to the Aubertine campaign for their hard work and for sticking to the strategy that led to victory, no matter how badly the odds were stacked against them.


Tuesday, February 05, 2008

Open Ended & Listed Issue Concerns

Our polling, and several national polls by other organizations, have shown that the Iraq War is no longer the highest issue concern among likely voters. While it is still a major issue that could quickly rise to the top again given sufficient media attention, the War is being trumped by financial concerns: the stalling economy, foreclosures, rising taxes, and health care and prescription medicine costs.
"The economy, stupid"
Different Bush, different Clinton, same message... new Obama?

Is it really 1992 all over again?
"It did take a Clinton to clean (up) after the first Bush, and I think it might take a second one to clean up after the second Bush..." -- Hillary Clinton, January 31, 2008
Some seem to think so.

Carville's famed sign on the wall of the Little Rock office in 1992 also included two other important phrases:
1. "Change vs. more of the same"

2. "Don't forget health care."
While the relevance of 1992 is up for debate, we're finding that health care is becoming ever-more defined as an economic concern, and undoubtedly, an underlying component of the current feeling of economic uncertainty. According to a 2005 Harvard University study*, 68 percent of those who filed for bankruptcy in the US had health insurance. In addition, the study found that 50 percent of all bankruptcy filings were partly the result of medical expenses. Indeed, don't forget health care.

Simply talking off a bullet point isn't enough to relate to the voters. Candidates have to speak directly to the concerns of the electorate. While all challenger campaigns are inherently running on a message of "change," even if that message is never directly communicated, the nuances of the message behind their version of "change" will determine if they connect with the voters. As we've seen in the recent debates, none of the candidates were shying away from the word "Change..."


Undoubtedly the word "change" tested well.

Pollsters sometimes use listed issue-concern questions instead of open-ended issue-concern questions to save time (and money) in a survey. We find that listed issue questions often miss how issues are being discussed. While F&A, Inc. hasn't recently conducted a nationwide survey that included a top issue concern question, some of our recent surveys for several Midwest and East Coast state legislative campaigns included open-ended, verbatim-response questions about the voters' top issue concerns.

For example, in one of our recent surveys, a respondent offered the following when asked about his or her most important issue concern:
"I WOULD SAY HEALTH CARE SHOULD BE MORE AFFORDABLE TO PEOPLE AND THAT DRUG PRESCRIPTIONS PRICES SHOULD BE LOWERED. I THINK THAT EVERYBODY SHOULD HAVE AFFORDABLE INSURANCE"
This response was typical of the responses that related to health care. A campaign in this district that focuses its health care message on concerns other than making health care affordable and lowering the cost of prescription drugs will not be connecting with the concerns of the voters in this particular district.

1992 or not, candidates who speak directly to the concerns of the voters, in a way that addresses those concerns, will fare far better than candidates who elaborate off a bullet point without qualified direction.

Early benchmark surveys should be as comprehensive as possible and include open-ended issue concerns whenever possible and appropriate for a campaign's budget. These types of questions help drill deeper into how and why a voter thinks and cares about a particular issue, and they provide better strategic direction on how a candidate can address it.

In a presidential race, 1992 or 2008, there simply is no excuse for not getting the message right. Regardless of the level of campaign, we always say it is better to have the message designed right the first time than to spend the rest of the campaign correcting it.



* Himmelstein, D., E. Warren, D. Thorne, and S. Woolhandler, "Illness and Injury as Contributors to Bankruptcy," Health Affairs Web Exclusive W5-63, February 2, 2005.

Wednesday, December 05, 2007

Be prepared for the "Re-Elect" reports and beware of the numbers

Around this time of year you'll start to notice newspapers reporting "re-elect" numbers in their headlines, and press releases and fundraising memos from candidates stating that Congressman XX has a re-elect number of only XX% (always way below 50%).

These numbers arise from a few different forms of the question...

Thinking about the 'upcoming' election for U.S. Congress, do you think you will vote to reelect (NAME OF CONGRESSMAN), will you consider voting for someone else, or do you think you will vote to elect someone else?

Do you think most of the Democrats in Congress deserve to be reelected, or not?

Do you think (NAME OF OFFICIAL), the Representative in Congress from your district has performed his or her job well enough to deserve reelection, or do you think it's time to give a new person a chance?

These "re-elect" questions may take other forms as well.

Beware of these numbers when reported on their own, without other supporting information.

"Re-elect" questions, in our experience, usually reflect suppressed levels of support for candidates and don’t show a true status of an incumbent's re-election standing. For example, in 2006 we polled in a Midwestern congressional district for a prospective challenger. The incumbent had a very low (26%) re-elect number, but that same official had nearly a 50% positive job approval rating and a personal favorability rating that was twenty points higher than the re-elect assessment. We've seen similar discrepancies between re-elect questions and other incumbent assessment items in our surveys and other polls throughout the years.

The "Re-Elect" question, in its various forms, should never be interpreted on its own as the tell-tale sign of an incumbent's prospects. It should only be factored in the evaluation when it is accompanied by related questions whose data also support its conclusion.

Accompanying indicators should include measurements such as job approval ratings, personal favorability ratings, and trial heat numbers. Job performance is the best indicator of whether a politician is meeting the expectations of his or her electorate, and it is almost always the best indicator of an incumbent's current re-election chances. Personal favorability ratings reveal the depth and direction of voter sentiment toward an incumbent: they identify whether an incumbent's ratings are driven by soft factors such as passive name ID or simple partisanship, or by real, intense, deeply held favorable or unfavorable personal sentiment. The trial heat places the candidate in a ballot simulation with another candidate or candidates, complicating the election by injecting numerous factors, including party labels and, not least, the level of recognition and personal favorability voters feel toward the challenger(s). These factors, taken as a whole, provide a comprehensive review of an incumbent’s prospects.
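One way to operationalize this advice is a simple consistency check: treat a re-elect number as meaningful only when it broadly agrees with the accompanying indicators. The figures below echo the 2006 example above, but the 15-point agreement threshold is our invented illustration, not a published rule.

```python
# Hypothetical incumbent readings, in percent.
incumbent = {
    "re_elect": 26,       # % saying they would vote to re-elect
    "job_approval": 49,   # % rating job performance positively
    "favorability": 46,   # % with a favorable personal opinion
}

def re_elect_supported(data, gap=15):
    """True only if the re-elect number is broadly consistent with the
    other indicators (within `gap` points of each of them)."""
    others = [data["job_approval"], data["favorability"]]
    return all(abs(data["re_elect"] - x) <= gap for x in others)

# A 26% re-elect alongside ~50% approval fails the check: treat it as a
# suppressed reading, not a death knell, and seek more data.
print(re_elect_supported(incumbent))
```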

Re-elect questions can be useful when interpreted and reported in conjunction with these other related factors, but should never be considered a strong, accurate read on an incumbent if reported independent of other variables.

So, we advise anybody who reads a news story, gets a press release, or sees a fundraising memo proclaiming an incumbent is in dire straits because of some low re-elect number to ask for other supporting data to verify such a conclusion.

[Note: For the reasons stated above and to ensure credibility and accuracy in our surveys, Fako & Associates rarely asks re-elect questions in our polls. When clients request the use of the question, we will only use and report it in conjunction with other questions such as personal favorability and job performance ratings.]

Thursday, October 18, 2007

Components of Political Polling: The Benchmark Analysis

Campaigns need a detailed analysis that highlights the key findings of a survey and interprets the findings into usable strategic recommendations. A good analysis always goes significantly more in-depth than the "slide show" presentations that have become popular in recent years. A quality analysis provides direction, focus and discipline to a campaign.

Continuing our series on understanding political polling, we've previously discussed the components of crafting a survey and how to read crosstabs. Now, we will discuss the components that should be present in an in-depth strategic analysis of a typical comprehensive benchmark survey.

An analysis is essentially a detailed strategic playbook for a campaign in three parts. A good analysis will be one part reporting key findings, one part analysis and strategic interpretation of those findings, and another part interrelating these findings and interpretations into cohesive strategic advice.

The analysis should report the significant findings of the survey and the pollster's strategic interpretation of the data. As the report progresses, a strategy is developed as all the pieces of the puzzle come together, with a final conclusion and a detailed understanding of the candidate’s situation, dynamics of the election and a clear (or best available) path to victory.

All surveys should start with background information on the survey and other pertinent information. Sometimes campaigns will overlook this information, but it is critical so we'll take some additional time to address the importance of the 'technical stuff.'


Survey Methodology

Analysis reports for all surveys should contain a statement on methodology. The methodology statement should contain several fundamental facts about how the survey was conducted. At a minimum, the sample size, margin of error, the days the survey was conducted, if screening and/or weighting techniques were used, and how the survey was conducted should be present. It also should identify the polling firm and who commissioned the survey.

While many polling organizations are hesitant to reveal their specific scientific sample designs in reports for public release, the methodology statement should contain some insight into which sampling methods were used and their intended purpose. This gets into how a survey was conducted, a topic we may cover in depth another day.

Methodology statements are important to campaigns because they help address concerns over the quality and accuracy of collected research data. Unless the methodology statement demonstrates that statistically valid inferences can be drawn about the studied population, a survey won't earn credibility with potential donors, supporters, campaign staff or the media and should not be trusted to guide your campaign.
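To make the numbers in a methodology statement concrete, the margin of error is a direct function of sample size. A minimal sketch, assuming simple random sampling, a 95% confidence level and the worst-case 50/50 response split (the sample sizes here are hypothetical):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 400-interview benchmark survey:
print(round(margin_of_error(400) * 100, 1))  # → 4.9 (i.e., ±4.9 points)
# A 150-respondent subgroup within that same survey is far less precise:
print(round(margin_of_error(150) * 100, 1))  # → 8.0 (i.e., ±8.0 points)
```

This is also why subgroup results in cross tabs carry a wider margin of error than the topline: each subgroup is, in effect, a smaller sample.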


Survey Information / Glossaries

Most analysis reports will contain references to various demographic and attitudinal groups using terminology that may need defining because pollsters use their own terminology. Basic terms may be as simple as regional classifications (ex. "This specific group of precincts represents this neighborhood..."). Slightly more complex terms may reference overall supporters and opponents or further classifications such as Weak/Strong Supporters/Opponents. The glossary should define how these groupings are arrived at. Depending on the complexity of the analysis, a glossary may also define combinations of attitudinal groups.

It also is important for surveys to identify the demographic composition of the survey, which to a degree may be accomplished in the glossary. This is important to evaluate if the survey is representative of known demographics or if it is identifying a demographic shift that older data does not reflect.
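One common way a pollster corrects a sample toward known demographics is post-stratification weighting: each group's weight is its known population share divided by its share of the sample. A minimal sketch with hypothetical age-group shares (not figures from any actual survey):

```python
# Hypothetical age-group shares: known population figures vs. the raw sample.
population_share = {"18-34": 0.25, "35-49": 0.25, "50-64": 0.28, "65+": 0.22}
sample_share     = {"18-34": 0.15, "35-49": 0.22, "50-64": 0.33, "65+": 0.30}

# Each group's weight is its population share divided by its sample share,
# so underrepresented groups are weighted up and overrepresented groups down.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group in sorted(weights):
    print(f"{group}: weight {weights[group]:.2f}")
```

Here the underrepresented 18-34 group receives a weight above 1 while the overrepresented 65+ group receives a weight below 1, which is the basic mechanism behind the weighting techniques a methodology statement should disclose.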


The Political Environment

The following sections deal with something we refer to as the political environment. Campaigns must account for several factors that are outside (or only partially within) the campaign's control but have a significant impact on how the campaign must operate.


Voter Intensity

Political polling does not predict election turnout, but survey data does expose voters' interest (intensity) in an election and trends in voting patterns. One of the goals of a benchmark survey is to determine the environment in which a campaign must operate. Benchmark surveys typically ask voters (often through a series of questions) how interested they are in participating in an election.

It's vital to know if your campaign will have to overcome a deficit of interest to energize your base or potential supporters. Gauging voter intensity among your opponents’ supporters helps to reveal the level of activity and interest in your opponent. Voter intensity can also reveal if certain attitudinal groups are likely to express their feelings through the ballot box (ex. Conservative Men list "School Taxes" as their number one issue concern and have a significantly higher level of interest in the election than the population as a whole).

This small piece of survey data is critical and can provide a lot of information related to the overall interpretation of the poll.


Partisanship & Ideology

Through demographic questions, a benchmark level survey should be able to determine variations of self-identified partisanship across the various regions, demographic and attitudinal groups in your survey. It's helpful to know where your partisan base of support is, which areas are ideologically aligned with your positions, if voters are independently minded in terms of voting, if voters revert to partisan voting with lesser known candidates, etc.

Is your opponent dependent on his or her partisan label or is there cross-over support? Is promoting your partisan label a good thing in this district or does it preclude the voters from even listening to your message? What is the partisan and ideological makeup of undecided voters and other key target groups? These questions and many others can be addressed through testing partisanship and ideology and analyzing how these factors will impact the election.


Mood of the Voters

The mood of the voters section of an analysis lends the campaign insight into how pessimistic or content the voters are feeling. The mood of the voters can be arrived at through several different types of survey questions, including the common right track / wrong track questions. The mood of the voters is significant because, depending on their mood, the voters could be primed for messages of change, less responsive to such messages, more accepting of an incumbent's messages of accomplishment, or more likely to reject messages of accomplishment.

Through cross tabulation, your pollster should be able to tell the campaign what is driving the mood of the voters. Sometimes the mood of the voters has little to do with how they feel about a particular elected official.

A benchmark survey also helps a campaign understand what the voters are concerned about and how the voters feel about known significant or controversial issues. Knowing what the voters are concerned about in general helps the campaign refine its communications to speak to the concerns of the voters. Having an understanding of the voters' opinions on complex issues (like immigration, abortion, same-sex marriage) can save a campaign from making disastrous decisions such as shifting the focus of a campaign's message in a way that clashes with the voters or seeking a third-party issue endorsement that alienates the candidate from the voters.


Issue Concerns

Our surveys usually include a volunteered (open-ended) most important issue concerns section, although many pollsters use a listed version of this question (which we feel provides less useful information). These questions typically reveal the macro problems that the campaign already has an awareness of, to a degree. At the micro level, the most important issue concern can expose undercurrents and developing issues and allow for micro-targeting of particular demographics that express significantly more concern over a certain problem than other voters.

It is particularly important for the issue section of the analysis to relate the voters' concerns with the messages that were tested in the survey because messages that test strong will be inherently stronger if they relate to a key concern of voters. Knowing the most important issue concerns, which voters they have the greatest effect on, and how they interrelate with a candidate's agenda (message) will help the campaign target persuasive messages and develop a communications plan that ties into key concerns and the candidate's strongest points.


Opinions of Significant Issues

Occasionally there are significant and unique developments and issues that will have a direct effect on the election. These can include national, state, local and sometimes campaign specific issues. Pollsters do not tell candidates what their positions should be on the issues, but they can provide guidance on how to deal with the issue and how that concern may factor into the campaign.

It’s important to know what voters think about key issues in order to determine if your position is helpful or a hindrance to your campaign. Your poll can determine which demographics are likely to be "single-issue" voters through the use of issue questions and follow-up questions on voting behavior.

Reproductive freedom and stem cell research are examples of two hot button issues where a campaign needs to know if their position would make better sense as a broadcast or targeted message.

Benchmark level surveys also evaluate opinions of the candidates and other public individuals or organizations that could have an impact on the campaign. Knowing the voters' opinions of a candidate and his/her opponent lets a campaign know whether they have to focus time and money repairing image problems, defining themselves to the voters, if they can define their opponent or if the opponent is too well known, etc. Political surveys also test support for candidates in various configurations of trial heat simulations and test the campaign’s messages for persuasive value.


Opinions of Candidates and Other Public Figures

A political survey is more than just finding out who is winning and who is best known. Benchmark level surveys allow for an in-depth review of the candidates. Personal favorability and job performance ratings where an incumbent is involved are typically included in benchmark level surveys.

These items give an overall picture of where a candidate stands in the minds of the voters. More in-depth surveys will offer a series of open-ended questions asking specifically about positive or negative aspects of either candidate. This type of questioning can prove invaluable in tight races, especially when one of the candidates is a longer-term incumbent or well known public figure.

Discovering the voters' opinions of the candidates lets the campaign know whether a candidate has deep-rooted appeal or merely passive support and whether the voters think he/she is doing a good job; open-ended questions can reveal nuances in opinion that expose potential vulnerabilities or strengths.

Beyond the candidates, a campaign needs to evaluate opinions of other significant public figures (or organizations). This data helps the campaign decide if it is worth expending resources to promote an endorsement, which demographics to target with information about a specific endorsement, and whether an opponent's public supporters can impact voting behavior.


Trial Heats and Informed Trial Heats

The horse race question is the standard-bearer for political surveys. A simple trial heat can tell you which candidate would be leading if the election were held today and, more importantly, it can tell a campaign where and among whom each candidate is performing better and worse. The analysis of this component of the survey should detail where each candidate garners their support, how deep and strong that support is, and identify each candidate's areas of weakness.

An informed trial heat is designed to simulate the effects of an engaged campaign, presenting balanced positive and negative messages about each candidate. The end result of the election scenario tells a campaign what is possible with the messages and themes it plans to implement. Most importantly, this section will determine if the core message works in direct contrast to the opponent(s)' message and can identify movement among the various demographic and attitudinal groups -- helping refine strategy.

The interpretation of this part of a survey should include detailed demographic profiles of various target groups such as undecided voters, strong and weak supporters of the candidates, persuadable voters and any other group the pollster has identified as vital to the campaign. This section of the analysis also should highlight the key objectives of the campaign and the targets to meet those objectives. For example, the poll may identify that your Democratic candidate has significantly lower name ID and support among Democratic women than their male counterparts, so a key early base building objective of the campaign would be to raise name ID and favorable image of your candidate among this group.


Messages

It is better for the campaign to have its messages right the first time than to spend the rest of the campaign correcting them.

A benchmark survey tests the persuasive value of a campaign's set of messages. There are various ways to accomplish this -- through contrasting statements, statement choice tests, informed ballot tests, tests of isolated issues / messages / language and other means. Benchmark level surveys typically test batteries of positive and negative messages on all candidates.

Through the analysis process, it is possible to determine which messages hold the most overall strength and, just as importantly, among which subgroups and key target groups each message is most persuasive.

With an understanding of the top issue concerns in the district, campaigns can utilize message testing to speak directly to targeted sets of voters on the issues that concern them most. Testing messages isn't so much about finding out what to say as it is about finding out whether what the campaign plans to say is worth saying and who best to say it to. Message testing is used by campaigns to narrow down their messages to those that are both relevant and most convincing.

Campaigns can also test the value of narratives and themes the campaign is considering using. A well-fitted narrative and a comprehensive theme can tie together the various policy priorities and beliefs of the candidate into a cohesive message that works for its political environment.

The analysis of the poll should: recommend the overall thematic connections for the message; suggest language that emphasizes this narrative; synthesize the message; identify what components and language are the best for the campaign to utilize; and, identify the strongest points of contrast with your opponent(s).


Analysis Conclusions and Message / Target Group Demographic Profile

We also feel that every analysis should have a conclusion that ties all the pieces together in a simple, easy-to-use format. This conclusion clearly needs to show the political environment, dynamics of the election, key target groups, the demographic profile and the messages that work best with these groups. We like to use a simple target and message profile chart -- basically a "reminder sheet" for the campaign team.

This summary will synthesize the more detailed and complex components of the analysis into a simple, actionable document. It will highlight the campaign's key objectives, targets and message. This conclusion serves as a constant reminder for the strategic team of where the campaign needs to focus.

Tuesday, September 11, 2007

Two common mistakes that cost local campaigns their elections.

Campaigns outside of major population centers, lower budget and local campaigns often rule out conducting strategic opinion surveys for two reasons: they think they can't afford a strategic survey and the candidate and/or staff believes they know the city or district where they are asking for votes. These mistakes set up local and lower budget campaigns for failure.

While having a comprehensive survey that allows for an in-depth evaluation of the election dynamics and gauging voters' philosophical outlook would benefit any campaign, that level of information is not necessary to develop a winning strategy for a campaign that will likely be driven by direct voter contact (field campaigning) and direct mail.

A poll, regardless of size and length, should be designed to develop a strategy. As we've discussed previously on this blog, the cost of polling is determined primarily by the number of interviews conducted and the length of the survey (in minutes, not necessarily the number of questions). Even a shorter poll can highlight the persuasive winning arguments and expose the strengths and weaknesses of the candidates. A modest poll doesn't mean you are only going to find out who is winning, losing and better known… All polls are about developing and refining strategies that maximize the use of a campaign's finite resources.

Localized races like District Attorney, County Board, and Alderman and Mayor (in smaller cities) can make use of smaller sample sizes (depending on the size of the voting population) and shorter surveys that last 5 to 10 minutes. Surveys of this scope can accurately evaluate the opinions of the candidates and other public figures, test the strength of support for the candidates, identify key targets, and test a limited number of messages for the campaign while not breaking the campaign's budget.

One of the main objectives of a lower budget campaign in seeking polling should be to find the message(s) that have the widest appeal to the largest number of voters and niche messages that are targetable to subgroups of appropriate scope.

When determining which messages tend to work better with certain subgroups, it may not be possible to determine with high statistical accuracy that a particular message works better with a very specific, micro-targeted subgroup in a poll with a small sample size. Larger, more practically targeted subgroups allow for a statistically reliable reading of which message is most persuasive, and those broader subsets remain useful and practical for your direct mail and field campaign budget.

To give an example on targeting for a lower budget campaign, let's assume the campaign's survey reveals that your candidate has weaker support among younger women. Generally speaking, it is more effective for a lower budget campaign to use a more practical, broader direct mail targeting criterion (such as "women under 50" instead of "women 18-25 in households with children with net income over 100K") and deliver a message that tests well among many voters within a broader subgroup. Distributing your best messages to as many voters as possible is vital because campaigns at the local level already deal with a small number of voters, are lower profile, gather less earned media, and their candidates are typically less known and get less attention than higher profile campaigns.

An experienced polling company should be able to evaluate the messages your campaign wants to test and help you refine them in a way that will maximize the quality of the data your survey will provide. Subgroup analysis to determine message targeting is still possible with smaller sample sizes, even with the inherently higher margin of error that a smaller sample size brings. The bottom line is that a campaign shouldn't spend money for a survey that provides data that the campaign’s budget won't allow it to use.

The other common mistake of lower budget and local campaigns is assuming a level of knowledge about the voters, their concerns, and what messages will work for their campaign. While it is possible to knock on every door in a smaller city or county, it is impossible for a candidate or campaign to gather the type of information that is revealed by a random sample survey of likely voters. Even the briefest of political polls will screen for likely Election Day voters and sample the electorate in proportion to the predicted Election Day turnout. The screening process helps to ensure that the opinions reflected in the survey are those of voters who will likely participate in and ultimately decide the outcome of the election. The responses campaigns receive from activists, supporters, and voters at the door do not generally represent or generalize to the opinions of all those who are likely to cast their ballots on Election Day. Polling provides vital strategic data to complement the anecdotal information that your campaign will receive.
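A likely-voter screen is conceptually just a filter on stated intent and past voting behavior. A toy sketch (the two-of-the-last-four-elections cutoff and the respondent records are hypothetical, not any firm's actual screen):

```python
# Hypothetical likely-voter screen: stated intent plus past voting behavior.
respondents = [
    {"id": 1, "intent": "definitely", "past_votes": 4},
    {"id": 2, "intent": "probably",   "past_votes": 1},
    {"id": 3, "intent": "unlikely",   "past_votes": 3},
    {"id": 4, "intent": "definitely", "past_votes": 2},
]

# Keep respondents who say they will vote AND voted in at least
# 2 of the last 4 elections (both cutoffs are illustrative).
likely_voters = [
    r for r in respondents
    if r["intent"] in ("definitely", "probably") and r["past_votes"] >= 2
]

print([r["id"] for r in likely_voters])  # → [1, 4]
```

Real screens layer several questions and vote-history variables, but the principle is the same: filter out respondents unlikely to cast a ballot before their opinions shape the strategy.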

Getting wrapped up in the cost of political polling and the assumed need for a full-blown, multiple micro-targeted direct mail campaign often reinforces the belief among candidates and their staff that their campaign doesn't need strategic research to understand their electorate. The reality is that lower budget and local campaigns can rarely afford to go into an election without the research that a basic survey can provide. Good information is the foundation of a successful campaign strategy. Thinking that political polling is too expensive and that they know their electorate well enough costs campaigns their elections.

Tuesday, August 28, 2007

Cross Tabulation Tables

Cross Tabulation tables (often referred to as "cross tabs" or "tabs") numerically display the results of a survey, showing how the answers to one poll question break down according to the answers to another poll question or other data derived from the sample.

Generally speaking, cross tabs are detailed statistical / numeric analysis of a survey, but in a raw, unfinished form. Your pollster will use cross tabs to generate a written analysis that will serve as your campaign's strategic guide and tactical play-book.

It is important to understand what basic components are necessary to generate a good set of cross tabs that will in turn give you the data needed to make strategic and tactical decisions. Although many campaigns have staff that understand how to read tabs, a campaign should not have to probe through cross tabs to get answers from their poll -- that's the job of the pollster.

Unlike a "topline report," which reports the breakdown of answers to a specific poll question in terms of the overall percentage of all respondents, a cross tab shows how the results of questions (such as who is winning in a trial heat) relates to a variety of demographic and attitudinal classifications.

(Figure 1, Click Image to Enlarge)
Technically speaking, a cross tabulation displays the joint distribution of two or more variables (e.g.: "Male" & "Candidate 1 Supporters" or "Male less than 50 years old" & "Candidate 1 Supporters") (See Figure 1).

The topline report might show Candidate 1 capturing 24% of the vote and Candidate 2 capturing 50%, with 26% undecided. A cross tab (See Figure 2), however, will show candidate 1 capturing 24% of the vote overall, but also shows candidate 1 capturing 31% among voters in the South Region and performing better among voters over 65 with 32% of their vote.

(Figure 2, Click Image to Enlarge)
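The mechanics behind a cross tab can be sketched in a few lines: count answers across all respondents for the topline, then count them again within each region. The respondents below are hypothetical, and the figures are unrelated to the numbers in Figure 2:

```python
from collections import Counter

# Hypothetical respondents: (region, trial-heat answer) pairs.
responses = [
    ("North", "Candidate 1"), ("North", "Candidate 2"), ("North", "Undecided"),
    ("North", "Candidate 2"), ("South", "Candidate 1"), ("South", "Candidate 1"),
    ("South", "Candidate 2"), ("South", "Undecided"),
]

# Topline: each answer as a share of all respondents.
topline = Counter(answer for _, answer in responses)
total = len(responses)
for answer, count in topline.items():
    print(f"{answer}: {count / total:.0%} overall")

# Cross tab: the same question broken down within each region.
by_region = Counter(responses)  # keys are (region, answer) pairs
regions = Counter(region for region, _ in responses)
for (region, answer), count in sorted(by_region.items()):
    print(f"{region} / {answer}: {count / regions[region]:.0%}")
```

The key detail is the denominator: topline percentages divide by all respondents, while each cross tab cell divides by the size of its own subgroup, which is why the same question can read very differently row by row.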

Now that you have a basic understanding of how cross tabulations work and are read, it's important to know which demographics and attitudinal groups should be included in your cross tabs.

Every pollster has their preferred way of structuring questions and response items. We've created a very basic set of sample cross tabs with easy to understand response items to help show you some possible combinations within the tables.

As you saw in Figure 2, demographic questions are among the most essential components of your cross tabs. You need to know how regional differences affect the opinions of voters; we discussed the importance of a regional break-down in the previous post on the components of a survey. Beyond regions, your cross tabs should include, depending on the scope and purpose of your survey, demographic data like gender, age, race, political party ID, political ideology, length of residence, education level, income level, marital status, whether the household has school-aged children and so on.

Cross tabs allow for more intricate demographic breakdown. As shown in Figure 2 and Figure 3 below, it's possible to combine variables to arrive at combinations of demographics, such as Men under Age 50 and Female African American voters. The combinations are nearly endless, but fine-grained groups lose statistical reliability as the sample size within each grouping decreases and the margin of error increases. Your survey sample size will largely determine the level of micro-analysis possible within your cross tabs.

Beyond clearly defined demographic groups, your cross tabs should include various special attitudinal groups. These groups may be defined through variables based on the respondent's opinions and answers to both demographic and substantive questions in the survey. They can represent key targets for your campaign, and using them in your cross tab analysis lets the pollster develop a demographic picture of these voters and profile key target groups. Attitudinal groups (See Figure 3 & 4) can help you determine the demographic breakdown of significant groups such as weak/strong supporters/opponents and undecided voters, and variants of them such as undecided non-Democrats and uncommitted women. They can include voters who support your party's top of the ticket but do not support your candidate, and swing / persuadable voters who in the course of the survey moved from undecided or supporting your opponent to supporting your candidate. Depending on the construction of your survey, you can also determine who the single-issue voters are on hot button issues. There is a wide array of attitudinal possibilities, limited only by the scope of your survey.

(Figure 3, Click Image to Enlarge)
(Figure 4, Click Image to Enlarge)

In the end, a well constructed cross tab report provides the numeric foundation and data for any campaign's strategic plans and is the root "source" for a pollster's written analysis and strategic advice. Pollsters who produce intricately detailed, useful tabs that go well beyond the standard factors will provide campaigns with higher quality strategic advice, offering more precise direction on message, targets and strategy and increasing the probability of developing a successful plan.


Tuesday, August 07, 2007

Components of Crafting a Survey

Most successful endeavors require prior planning; a successful political survey is no exception. The quality of your survey is dependent upon the value of the information gathered before it is crafted. There are generally seven research components needed to develop a strategic survey. We will briefly outline these criteria, explain their significance in surveying, and simultaneously review the various components that should be present in a good benchmark level survey.

A benchmark survey is a comprehensive evaluation of the election dynamics, candidates and issues. It is used to develop your strategy and message and serves as your campaign's tactical guide and strategic "play book." Benchmark surveys usually run 15 to 20 minutes or longer and should be as comprehensive as your budget allows.

Unlike Stephen Colbert’s interviews in his Better Know a District series, you have to have a basic understanding of the region where you will be polling.

What’s the population? How would you describe the various areas within where you want to poll? What are the characteristics of the area’s economy? Does the area have a unique history?

Population and voter registration data help determine the polling sample size needed to conduct an efficient, statistically accurate survey. All political surveys should have multiple regions, and the regions established for your survey have to represent something usable and significant to your campaign.

Is the West end of town mostly town homes? Is there a significant minority concentration on one side of town? Are there a handful of distinct townships within the region to be polled? Are these groups of precincts representative of a higher-income area? Are these groups of precincts virtually not walkable by the candidate or volunteers?

Questions like these should be asked when developing regions and the list of demographic classifications that you will find useful to have in your survey (such as any combination of gender, race, income, education, union status, political ideology, party ID, etc). Members of your campaign need to be able to look at your survey analysis and quickly determine what’s happening, among whom, and where.

Knowing population data (and vote share information) will help you determine what percentage of the vote each region represents. The campaign should work with the pollster (if needed) to find the projected turnout for the district and your winning 50% plus 1 number. Knowing where the vote is coming from and the opinion of voters in the various regions can help not only focus your campaign’s message, but also help in the allocation of the campaign’s budget.
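The arithmetic behind the win number and regional vote shares is straightforward. A toy illustration with hypothetical regional turnout projections (not figures from any actual district):

```python
# Hypothetical regional turnout projections for a local district.
projected_turnout = {"North": 6200, "South": 4800, "East": 3500, "West": 5500}

total_votes = sum(projected_turnout.values())  # 20,000 projected votes
win_number = total_votes // 2 + 1              # "50% plus 1" = 10,001 votes

print(f"Win number: {win_number}")
for region, votes in projected_turnout.items():
    print(f"{region}: {votes / total_votes:.1%} of the projected vote")
```

Pairing each region's share of the projected vote with the survey's regional results shows where persuasion effort and budget buy the most votes toward the win number.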

Answering preliminary questions about the area you plan on surveying establishes the background to keep in mind throughout the survey process. Knowing a little about the district’s personality serves as a guide through the survey data collection process and makes it easier to spot any potential abnormalities early on.

When crafting a campaign, you also need to know the key players within the district.

Are there key officials, public figures or private individuals who would have an interest or impact on the campaign? Would an endorsement from a certain official be a help or hindrance? Is it worth spending campaign time and money to promote a particular endorsement? Will campaigning or sharing resources with another candidate hurt or help my campaign?

A proficient benchmark survey will test the personal favorability ratings of people who are of interest to the campaign. Knowing whether someone has substantial name recognition and whether they are viewed positively or negatively will help determine if valuable campaign resources should be expended in seeking or promoting endorsements and/or affiliations.

If public officials are being tested in the survey, their job performance ratings are usually also tested. When in the position of a challenger, the job performance rating and personal favorability of the incumbent is always tested. Incumbents should test their own favorability and performance ratings to help reveal any underlying potential weaknesses.

Job performance and personal favorability ratings oftentimes reveal weaknesses that aren’t apparent in the benchmark survey's trial heats of candidates or issues. Sometimes voters will lean towards a known and disliked incumbent over an unknown challenger. For example, voters who respond as “weak supporters” in a trial heat question can be filtered, through cross tabulation reports, into a group that shows weak intensity toward a candidate personally or in their job performance rating. In this way, through a variety of response position combinations, it’s possible to narrow down the demographic list of voters that the campaign should target.

Classifying groups of voters based on their support and feelings toward candidates or issues is not so useful in itself, but knowing what strategic message to communicate to target groups to achieve a desired effect completes the purpose of the survey.

Detailed background information on the candidates or issues must be thoroughly reviewed, prioritized and incorporated into the survey. This information could contain both helpful and damaging information. Depending on the budget of a campaign, this type of information will contain variations of the following: summaries of public records (tax liens, lawsuits, etc.), news clippings, issue positions, campaign finances, endorsements, other court/judicial records, interviews, tax records, voting records and other vital documents.

This information gathering process is called Opposition Research (aka OR, Oppo) and Vulnerability Research (research about yourself or the campaign issue). They are critical components that must be completed before a survey is crafted. There are several reputable information gathering firms who specialize in generating detailed strategic OR reports. Lower budget campaigns can conduct their own baseline research, but they need to be careful about allowing volunteers to gather OR information, especially in candidate campaigns where zealous volunteers may not be diligent about fact-checking.

Opposition research has received a bad name in recent years (negative attacks, "Swift Boating," etc.), but OR itself is not inherently harmful to the democratic process. The ethics of OR is a discussion for a different posting, but remember that the information tested in a survey is always left to the discretion of the pollster and the campaign. A good pollster will lend you his or her experience and guidance on whether to test controversial information and advise your campaign on the utility of the information and the repercussions that could occur.

Along with knowing information about your opponents, yourself or your issue, and the details of the area to be polled, it's also important to understand all major or pertinent issues that may affect the campaign, including local issues. Polling allows you to test voters' opinions on important known issues.

Is a big-box retailer petitioning to move into town? Will reproductive freedom be a wedge issue brought into the campaign by your opponent? Is there a statewide debate brewing about same-sex marriage, or funding stem cell research?

Conducting a political survey about important issues will reveal not only the opinions of voters, but also the intensity they feel toward those issues. You've probably heard the expression "poll driven politician"; your campaign doesn't want that association. Polling should not be used to determine your (or your campaign's) issue positions. Rather, it should help you develop an approach to talking about an issue (or decide whether to talk about it at all), discover natural allies (demographically and regionally), and refine how your campaign communicates its messages about the issues.

Not every issue will be known to your campaign. A good benchmark survey incorporates an open ended question early on that directly asks voters what issue or problem concerns them most. Oftentimes this question reveals issues your campaign is already aware of, but open ended verbatim responses also show how issues are being discussed and lend insight into why voters feel the way they do. This form of open ended questioning sometimes reveals upcoming issues that haven't yet boiled to the surface.

Knowing where voters stand on the issues will help refine your message. A benchmark survey can test the persuasive value of potential arguments in support of your campaign and of arguments against your opponent(s). Testing messages isn't so much about finding out which attacks stick against your opponent, but rather about discovering which messages hold the most persuasive value and correlate most closely with the concerns of the voters. Your campaign needs to speak to those concerns; testing messages through a benchmark poll keeps a campaign on target by addressing the issues with messages that resonate with voters.

This primer on the components of a benchmark survey is meant to familiarize information seekers with the process of forming a political benchmark survey. While we would in no way consider this discussion comprehensive, it has hopefully answered some preliminary questions about planning for a political poll. Benchmark polls should be conducted as early as possible in a campaign. To summarize, a benchmark poll's purpose is to get the strategy and message correct early in the campaign, rather than spending the rest of the campaign fixing them.

Monday, July 23, 2007

The Smoke-Free Illinois Act is Signed

Click here to see a video report about the signing!

With the signing of Senate Bill 500 into law as the Smoke-Free Illinois Act, Fako & Associates, Inc. would like to congratulate all the organizations that have worked tirelessly to protect public health in Illinois. We especially thank the American Lung Association, which led the Smoke Free effort around the state. We also thank the American Cancer Society and the American Heart Association for letting us be a part of making Illinois a Smoke Free State.

The signing of SB500 is a strong commitment by the State of Illinois to protect hundreds of thousands of people each year from the deadly effects of secondhand smoke. We are proud to have been a part of this effort through our statewide work and our city-specific initiatives in Chicago, Oak Park and our state's capital, Springfield.

News on SB 500:
Blagojevich is poised to sign smoking ban
Blagojevich signs statewide indoor smoking ban
Blagojevich Signs Law Making Ill. Public Places Smoke-Free
Governor signs smoking ban bill
Governor signs statewide smoking ban


Wednesday, July 18, 2007

Cell Phones & Political Polling

As we enter the presidential primary polling season, articles are already surfacing questioning whether cell-phone-only users are distorting polling data. The common response to these concerns is that cell-phone-only households account for 12.8% of US households, and their responses are not likely to be much different from those of their associated demographic group. Initial research has shown cell-phone-only voters to be unmarried, younger, more mobile, less affluent, more likely to be a minority, and more liberal. Pollsters have so far concluded that the cell-phone-only demographic is less likely to vote and is not all that different from other voters in the 18-34 group. We've consistently heard that cell-phone-only voters have a minimal impact on results (one to two percentage points) and fall within the margin of error on most standard surveys.

Younger voters, the go-to demographic used to describe cellular-only voters, do indeed vote and are a significant part of the electorate (we've explained this before on this blog). Recent research from the Pew Research Center has dispelled another myth about cell-phone-only voters: they are not the same as their landline counterparts. The Pew Research Center conducted a dual frame survey comparing landline and cell-only voters. Across the survey's 46 questions, there was an average difference of 7.8% between the two groups, with differences ranging from 0% to 29%. The Pew Research Center estimated that the maximum change in the final survey results from adding cell-phone responses would be only 2%, and the mean change was less than a percentage point (0.7%). Those results are within the margin of error for most surveys.
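
For context on the "margin of error" referenced here, the worst-case sampling margin of error at roughly 95% confidence can be computed directly. The sample sizes below are assumed figures for illustration, not from the Pew study:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error at ~95% confidence
    for a proportion p estimated from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical survey of 1,000 respondents
print(f"+/- {margin_of_error(1000) * 100:.1f} points")  # +/- 3.1 points

# Smaller samples widen the interval
print(f"+/- {margin_of_error(400) * 100:.1f} points")   # +/- 4.9 points
```

A mean shift of 0.7 points, as Pew reported, sits comfortably inside such an interval, which is why adding cell-phone responses did not materially change the published results.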

While cell-phone-only voters today are concentrated among the demographic groups outlined above, there is evidence to suggest that the proportion of cell-phone-only households will only increase, and that as a result of growth in "wireless" services and aging demographics, the profile of a cell-phone-only voter is evolving. Wireless substitution will grow rapidly as the cost of services continues to decrease and their quality and convenience increase. Wireless substitution is also complicated by the ever-evolving technology available. For example, T-Mobile recently launched a Wi-Fi and mobile calling integrated phone (dual mode GSM/WiFi) that allows a cellular phone to act as a "landline" phone while in range of the user's wireless router. Other service providers are currently developing similar services.

Some pollsters suggest weighting samples to account for the uncovered population (such as weighting up certain demographics in a college town based on known demographics and voting behavior). While this tactic may work in larger, national surveys, a political survey for a state legislative district that captures less than 5% of the 18-34 demographic cannot be reliably weighted within that age group. Weighting may serve as a stop-gap measure to keep polling data accurate, if we believe that as more Americans become cellular-only voters their opinions will normalize toward the population as a whole.
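
A minimal sketch of that weighting approach, using invented sample counts and population targets (none of the figures below are real census or survey data), shows why a thin age cell is a problem:

```python
# Post-stratification sketch: reweight an age-skewed sample toward known
# population shares. All counts and targets are invented for illustration.
sample_counts = {"18-34": 40, "35-49": 260, "50-64": 350, "65+": 350}  # n = 1,000
population_share = {"18-34": 0.28, "35-49": 0.26, "50-64": 0.25, "65+": 0.21}

n = sum(sample_counts.values())
weights = {g: population_share[g] * n / sample_counts[g] for g in sample_counts}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
# 18-34 carries a weight of 7.00 -- each of those 40 interviews must stand in
# for seven voters, which is exactly why a cell that small cannot be weighted
# reliably.
```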

Other “soft sciences” have explored the “digital divide” and its effect on communities. As communication technologies change and integrate into internet hybrid services, such as VOIP (voice over internet protocol) and WiFi based UMA (Unlicensed Mobile Access), voters will continue to be further segmented along various demographic lines associated with access to digital communication technology. We do not believe there will be a dramatic "normalizing" effect between the various groups. No one can be certain yet where demographic lines will be drawn along still emerging technology channels.

Calling cellular phones is complicated for several reasons. Federal law prohibits the use of automated dialing devices when calling cell phones, so calls to cellular phones have to be dialed by hand, greatly reducing the efficiency of call centers. On the other end of the cost spectrum, cellular users typically have a per-minute rate plan, so cellular-based voters would incur a cost to participate in a survey, which would likely drive down participation rates. While compensation or reimbursement could be offered for the use of a participant's cellular minutes, this would add cost to conducting the survey. There are also issues of liability: what legal responsibilities does the pollster have if a respondent is driving while participating in a survey and gets into an accident? Cellular phones are also not tied to a geographic location, which creates difficulties in screening eligible participants; a respondent may live in one area but keep the area code and phone number from a previous address. And, as we've discussed, today's cell-only voters tend to be younger, which creates challenges in accurately developing a balanced sample for a political survey.

Moving beyond the problems of contacting cellular voters, some pollsters have increased their reliance on internet polling, such as pollster John Zogby of Zogby International. While internet polling has its own drawbacks (a conversation for another day), the process of mixed-use polling (to borrow a term from urban planners) seems like the most reliable and cost-effective method of reducing the sampling and non-response problems associated with cellular-only households. Mixed-use polling blends surveys, such as a traditional telephone survey and an accompanying internet poll. The methodology needs to be refined and adapted, but the future of traditional telephone polling is apparent: adapt or bust.
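
As a rough illustration of how a mixed-use design might combine modes into a single estimate (the sample sizes and support levels below are invented, and this simple weighting-by-interviews is only one possible blending rule), one approach looks like this:

```python
# Hedged sketch of blending two survey modes into a single estimate by
# weighting each mode's result by its share of completed interviews.
# All figures are invented for illustration.
phone = {"n": 600, "support": 0.52}   # traditional telephone survey
web   = {"n": 400, "support": 0.46}   # accompanying internet poll

total = phone["n"] + web["n"]
blended = (phone["n"] * phone["support"] + web["n"] * web["support"]) / total
print(f"blended support: {blended:.1%}")  # blended support: 49.6%
```

In practice each mode's sample would also be weighted to demographic targets before blending, since the two modes reach different populations.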


Monday, June 04, 2007

IVR Polling

In 1998 and 2000, campaigns began using a technology called IVR (Interactive Voice Response). This technology was primarily used for traditional voter contact and GOTV calls -- and also widely implemented for surreptitious "push polls." It was only a matter of time before this technology would be used for polling.

In 2002, we saw the first widespread use of this technology in polling -- mainly with publicized media polls. By 2004, the use of this technology for publicly released polling became commonplace, with several organizations and media outlets routinely releasing polling data using this technology.

This has created a lot of inquiries about this type of polling.

First, it is important to understand the pros and cons of the technology.

IVR is a new and constantly improving technology in the relatively short history of telephone-driven public opinion surveying. IVR surveys typically employ random digit dialing (RDD) and use a recorded voice instead of a live interviewer. Their main advantages are lower production costs, low barriers to entry, and the speed with which they can be fielded and with which data becomes deliverable. Their disadvantages, however, are numerous, and campaigns considering IVR surveys must understand the tradeoffs.

Campaigns that cannot justify or budget the cost of traditional live interview polling can opt for an IVR poll to provide basic baseline information (name ID, trial heats and perhaps opinions on a few issues) to help set the campaign's direction, or conduct simple tracking polls to make adjustments to established campaign plans. IVR polls should not, at least at this time and in our opinion, be relied upon as the exclusive rationale for directing significant paid media and crafting strategic, micro-targeted field plans.

In high profile races, at least, IVR surveys have a history of providing reliable data on where the electorate stands on issues or in support of a candidate, but IVR polls tend to fail as the complexity of a poll increases. Their success in lower profile, low level races is largely unknown, because little data is publicly released for races at that level.

One example of an IVR survey breaking down is an early vulnerability/viability survey in which a lengthy informed question is asked about both or multiple candidates, or both sides of a public policy debate are presented. IVR polls, so far, are unable to recognize a confused participant and clarify or repeat statements. Strategic message testing is difficult to implement in IVR polling for similar reasons. Open-ended questions are also very difficult to conduct effectively within IVR surveys, leaving out invaluable "issue concerns" that help determine the relevance of the campaign's various messages.

IVR surveys can be conducted quickly regardless of scale because of their ability to survey a large number of people in a very short period of time. They create a very large pool of "likely" voters, and their proponents suggest that respondents are somewhat less likely to lie to a machine than to a human on screening questions regarding civic responsibility, such as their likelihood of participating in an upcoming election[1]. Detractors of IVR surveys say that the enlarged pool of "likely" voters also points to a response bias: potential respondents are more likely to refuse a survey conducted by a recorded voice than one conducted by a live interviewer[2],[3].

We should point out that political polling typically operates under constraints that many other surveys do not encounter. Budgets, time pressures and the nature of political campaigns frequently call for adjustments to ensure reliable and credible data, whether the survey is conducted by a live interviewer or by computer.

IVR polling faces additional limitations in making these adjustments. Traditional live interview surveys are better able to screen for certain demographics at the introduction, such as asking to speak to the youngest male over 18. These factors and others lead IVR surveys to rely more heavily on weighting to bring their results into accordance with known demographics. IVR surveys also have a limited ability to prevent unqualified respondents from participating, and they face difficulties validating respondents in general. These factors require IVR polls to rely on "adjustments" to the data after it is collected rather than simply getting it right at the initial point of data collection.

Every campaign occurs under unique dynamics, and every survey faces unique circumstances that create different trade-offs to consider when deciding whether to employ a traditional live interview or an IVR survey.

There is no such thing as a perfect poll, traditional or otherwise. IVR technology is constantly improving, and people are becoming more comfortable with these types of surveys as they become more prominent. Despite these advancements, it is vital to weigh the advantages and drawbacks of IVR polling before deciding to use an IVR poll for your campaign.

The bottom line is that IVR polls should not be relied upon to collect comprehensive strategic data the way traditional benchmark polls are, but they should be considered for basic tracking surveys and for situations when traditional live interview polling isn't a viable option because of budget constraints.



[1] Kaus, Mickey. 2004. "Dem Panic Watch, Part III" Slate.com, May 2. Link

[2] Quigley, Fran. 2003. "Under-counting Julia Carson: How Effective Are Political Polls?" Nuvo.net, November 13. Link

[3] Sabin, Warwick. 2004. "Survey Says?" Arkansas Times, October 7. Link