Wednesday, December 05, 2007

Be prepared for the "Re-Elect" reports and beware of the numbers

Around this time of year you'll start to notice newspapers reporting "re-elect" numbers in their headlines, and press releases and fundraising memos from candidates stating that Congressman XX has a re-elect number of only XX% (always well below 50%).

These numbers arrive from a few different forms of questions:

Thinking about the 'upcoming' election for U.S. Congress, do you think you will vote to reelect (NAME OF CONGRESSMAN), will you consider voting for someone else, or do you think you will vote to elect someone else?

Do you think most of the Democrats in Congress deserve to be reelected, or not?

Do you think (NAME OF OFFICIAL), the Representative in Congress from your district has performed his or her job well enough to deserve reelection, or do you think it's time to give a new person a chance?

These "re-elect" questions may take other forms as well.

Beware of these numbers when reported on their own, without other supporting information.

"Re-elect" questions, in our experience, usually reflect suppressed levels of support for candidates and don’t show a true status of an incumbent's re-election standing. For example, in 2006 we polled in a Midwestern congressional district for a prospective challenger. The incumbent had a very low (26%) re-elect number, but that same official had nearly a 50% positive job approval rating and a personal favorability rating that was twenty points higher than the re-elect assessment. We've seen similar discrepancies between re-elect questions and other incumbent assessment items in our surveys and other polls throughout the years.

The "Re-Elect" question, in its various forms, should never be interpreted on its own as the tell-tale sign of an incumbent's prospects. It should only be factored into the evaluation when it is accompanied by related questions whose data also support its conclusion.

Accompanying indicators should include such measurements as job approval ratings, personal favorability ratings, and trial heat numbers. Job performance is the best indicator of whether a politician is meeting the expectations of his or her electorate, and it is almost always the best indicator of an incumbent's current re-election chances. Personal favorability ratings reveal the depth and direction of voter sentiment toward an incumbent: whether the incumbent's ratings are driven by soft, passive name ID or simple partisanship, or whether there is real, intense personal like or dislike behind the official's numbers. The trial heat places the candidate in a ballot simulation against another candidate (or candidates), complicating the election by injecting numerous factors, including party labels and, not least, the level of recognition and personal favorability voters feel toward the challenger(s). Taken as a whole, these factors provide a comprehensive review of an incumbent's prospects.

Re-elect questions can be useful when interpreted and reported in conjunction with these other related factors, but should never be considered a strong, accurate read on an incumbent if reported independent of other variables.

So, we advise anybody who reads a news story, gets a press release, or sees a fundraising memo proclaiming an incumbent is in dire straits because of some low re-elect number to ask for other supporting data to verify such a conclusion.

[Note: For the reasons stated above and to ensure credibility and accuracy in our surveys, Fako & Associates rarely asks re-elect questions in our polls. When clients request the use of the question, we will only use and report it in conjunction with other questions such as personal favorability and job performance ratings.]

Thursday, October 18, 2007

Components of Political Polling: The Benchmark Analysis

Campaigns need a detailed analysis that highlights the key findings of a survey and interprets the findings into usable strategic recommendations. A good analysis always goes significantly more in-depth than the "slide show" presentations that have become popular in recent years. A quality analysis provides direction, focus and discipline to a campaign.

Continuing our series on understanding political polling, we've previously discussed the components of crafting a survey and how to read crosstabs. Now, we will discuss the components that should be present in an in-depth strategic analysis of a typical comprehensive benchmark survey.

An analysis is essentially a detailed strategic playbook for a campaign in three parts. A good analysis will be one part reporting of key findings, one part analysis and strategic interpretation of those findings, and one part interrelating those findings and interpretations into cohesive strategic advice.

The analysis should report the significant findings of the survey and the pollster's strategic interpretation of the data. As the report progresses, a strategy is developed as all the pieces of the puzzle come together, with a final conclusion and a detailed understanding of the candidate’s situation, dynamics of the election and a clear (or best available) path to victory.

All survey reports should start with background on the survey and other pertinent information. Sometimes campaigns will overlook this information, but it is critical, so we'll take some additional time to address the importance of the 'technical stuff.'


Survey Methodology

Analysis reports for all surveys should contain a statement on methodology. The methodology statement should contain several fundamental facts about how the survey was conducted. At a minimum, the sample size, margin of error, the days the survey was conducted, if screening and/or weighting techniques were used, and how the survey was conducted should be present. It also should identify the polling firm and who commissioned the survey.
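The margin of error in a methodology statement follows directly from the sample size. Here is a minimal sketch of that relationship, assuming simple random sampling, the conventional 95% confidence level, the worst-case 50/50 split, and no design effect from screening or weighting (real sample designs can widen these figures):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error at 95% confidence for a proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Typical benchmark survey sample sizes and their margins of error
for n in (400, 600, 800):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

This is why a 400-interview survey is usually reported at roughly +/- 4.9 points while an 800-interview survey comes in near +/- 3.5: quadrupling precision requires far more than quadrupling the sample.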

While many polling organizations are hesitant to reveal their specific scientific sample designs in reports for public release, the methodology statement should contain some insight into which sampling methods were used and their intended purpose. Sampling design itself is a larger topic that we may cover another day.

Methodology statements are important to campaigns because they help address concerns over the quality and accuracy of collected research data. Unless the methodology statement ensures the statistical validity of inferences about the studied population, a survey won't earn credibility with potential donors, supporters, campaign staff or the media and should not be trusted to guide your campaign.


Survey Information / Glossaries

Most analysis reports will contain references to various demographic and attitudinal groups using terminology that may need defining, because pollsters use their own terminology. Basic terms may be as simple as regional classifications (ex. "This specific group of precincts represents this neighborhood..."). Slightly more complex terms may reference overall supporters and opponents, or further classifications such as Weak/Strong Supporters/Opponents. The glossary should define how these groupings are arrived at. Depending on the complexity of the analysis, a glossary may also define combinations of attitudinal groups.

It also is important for surveys to identify the demographic composition of the survey, which to a degree may be accomplished in the glossary. This is important to evaluate if the survey is representative of known demographics or if it is identifying a demographic shift that older data does not reflect.


The Political Environment

The following sections deal with something we refer to as the political environment. Campaigns must account for several factors that are outside (or only partially within) the campaign's control but have a significant impact on how the campaign must operate.


Voter Intensity

Election turnout is not predicted by political polling, but the voters' interest (intensity) in an election and trends in voting patterns are exposed through survey data. One of the goals of a benchmark survey is to determine the environment in which a campaign must operate. Benchmark surveys typically ask voters (often through a series of questions) how interested they are in participating in an election.

It's vital to know if your campaign will have to overcome a deficit of interest to energize your base or potential supporters. Gauging voter intensity among your opponents’ supporters helps to reveal the level of activity and interest in your opponent. Voter intensity can also reveal if certain attitudinal groups are likely to express their feelings through the ballot box (ex. Conservative Men list "School Taxes" as their number one issue concern and have a significantly higher level of interest in the election than the population as a whole).

This small piece of survey data is critical and can provide a lot of information related to the overall interpretation of the poll.


Partisanship & Ideology

Through demographic questions, a benchmark level survey should be able to determine variations of self-identified partisanship across the various regions, demographic and attitudinal groups in your survey. It's helpful to know where your partisan base of support is, which areas are ideologically aligned with your positions, if voters are independently minded in terms of voting, if voters revert to partisan voting with lesser known candidates, etc.

Is your opponent dependent on his or her partisan label or is there cross-over support? Is promoting your partisan label a good thing in this district or does it preclude the voters from even listening to your message? What is the partisan and ideological makeup of undecided voters and other key target groups? These questions and many others can be addressed through testing partisanship and ideology and analyzing how these factors will impact the election.


Mood of the Voters

The mood of the voters section of an analysis lends the campaign insight into how pessimistic or content the voters are feeling. The mood of the voters can be arrived at through several different types of survey questions, including the common right track / wrong track questions. The mood of the voters is significant because, depending on their mood, the voters could be primed for messages of change, less responsive to such messages, more accepting of an incumbent's messages of accomplishment, or more likely to reject them.

Through cross tabulation, your pollster should be able to tell the campaign what the driving factor behind the mood of the voters is. Sometimes the mood of the voters has little to do with how they feel about a particular elected official.

A benchmark survey also helps a campaign understand what the voters are concerned about and how the voters feel about known significant or controversial issues. Knowing what the voters are concerned about in general helps the campaign refine its communications to speak to the concerns of the voters. Having an understanding of the voters' opinions on complex issues (like immigration, abortion, same-sex marriage) can save a campaign from making disastrous decisions, such as shifting the focus of a campaign's message in a way that clashes with the voters or seeking a third-party issue endorsement that alienates the candidate from the voters.


Issue Concerns

Our surveys usually include a volunteered (open-ended) most important issue concerns section, although many pollsters use a listed version of this question (which we feel provides less useful information). These questions typically reveal the macro problems that the campaign already has some awareness of. At the micro level, the most important issue concern can expose undercurrents and developing issues and allow for micro-targeting of particular demographics that express concern over a certain problem significantly more than other voters.

It is particularly important for the issue section of the analysis to relate the voters' concerns to the messages that were tested in the survey, because messages that test strong will be inherently stronger if they relate to a key concern of voters. Knowing the most important issue concerns, which voters they have the greatest effect on, and how they interrelate with a candidate's agenda (message) will help the campaign target persuasive messages and develop a communications plan that ties into key concerns and the candidate's strongest points.


Opinions of Significant Issues

Occasionally there are significant and unique developments and issues that will have a direct effect on the election. These can include national, state, local and sometimes campaign specific issues. Pollsters do not tell candidates what their positions should be on the issues, but they can provide guidance on how to deal with the issue and how that concern may factor into the campaign.

It’s important to know what voters think about key issues in order to determine if your position is helpful or a hindrance to your campaign. Your poll can determine which demographics are likely to be "single-issue" voters through the use of issue questions and follow-up questions on voting behavior.

Reproductive freedom and stem cell research are examples of two hot button issues where a campaign needs to know if their position would make better sense as a broadcast or targeted message.

Benchmark level surveys also evaluate opinions of the candidates and other public individuals or organizations that could have an impact on the campaign. Knowing the voters' opinions of a candidate and his/her opponent lets a campaign know whether they have to focus time and money repairing image problems, defining themselves to the voters, if they can define their opponent or if the opponent is too well known, etc. Political surveys also test support for candidates in various configurations of trial heat simulations and test the campaign’s messages for persuasive value.


Opinions of Candidates and Other Public Figures

A political survey is more than just finding out who is winning and who is best known. Benchmark level surveys allow for an in-depth review of the candidates. Personal favorability ratings and, where an incumbent is involved, job performance ratings are typically included in benchmark level surveys.

These items give an overall picture of where a candidate stands in the minds of the voters. More in-depth surveys will offer a series of open-ended questions asking specifically about positive or negative aspects of either candidate. This type of questioning can prove invaluable in tight races, especially when one of the candidates is a long-term incumbent or well-known public figure.

Discovering the voters' opinions of the candidates lets the campaign know whether a candidate has deep-rooted appeal or passive support, whether the voters think he or she is doing a good job, and, through open-ended questions, can uncover nuances in opinion that expose potential vulnerabilities or strengths.

Beyond the candidates, a campaign needs to evaluate opinions of other significant public figures (or organizations). This data helps the campaign decide whether it is worth expending resources to promote an endorsement, which demographics to target with information about a specific endorsement, and whether an opponent's public supporters can impact voting behavior.


Trial Heats and Informed Trial Heats

The horse race question is the standard-bearer for political surveys. A simple trial heat can tell you which candidate is currently leading if the election were held today and, more importantly, it can tell a campaign where and among whom each candidate is performing better and worse. The analysis of this component of the survey should detail where each candidate garners support, how deep and strong that support is, and identify each candidate's areas of weakness.

An informed trial heat is designed to simulate the effects of an engaged campaign, presenting balanced positive and negative messages about each candidate. The end result of the election scenario tells a campaign what is possible with the messages and themes it plans to implement. Most importantly, this section will determine if the core message works in direct contrast to the opponent(s)' message and can identify movement among the various demographic and attitudinal groups -- helping refine strategy.

The interpretation of this part of a survey should include detailed demographic profiles of various target groups such as undecided voters, strong and weak supporters of the candidates, persuadable voters and any other group the pollster has identified as vital to the campaign. This section of the analysis also should highlight the key objectives of the campaign and the targets to meet those objectives. For example, the poll may identify that your Democratic candidate has significantly lower name ID and support among Democratic women than among Democratic men, so a key early base-building objective of the campaign would be to raise your candidate's name ID and favorable image among this group.


Messages

It is better for the campaign to get its message right the first time than to spend the rest of the campaign correcting it.

A benchmark survey tests the persuasive value of a campaign's set of messages. There are various ways to accomplish this -- through contrasting statements, statement choice tests, informed ballot tests, and tests of isolated issues / messages / language, among other means. Benchmark level surveys typically test batteries of positive and negative messages on all candidates.

Through the analysis process, it is possible to determine which messages hold the most overall strength and, just as importantly, among which subgroups and key target groups each message is most persuasive.

With an understanding of the top issue concerns in the district, campaigns can use message testing to speak directly to targeted sets of voters on the issues they are most concerned about. Testing messages isn't about finding out what to say so much as finding out whether what the campaign plans to say is worth saying and who best to say it to. Message testing is used by campaigns to narrow down their messages to those that are both relevant and most convincing.

Campaigns can also test the value of narratives and themes the campaign is considering using. A well-fitted narrative and a comprehensive theme can tie together the various policy priorities and beliefs of the candidate into a cohesive message that works for its political environment.

The analysis of the poll should: recommend the overall thematic connections for the message; suggest language that emphasizes this narrative; synthesize the message; identify what components and language are the best for the campaign to utilize; and, identify the strongest points of contrast with your opponent(s).


Analysis Conclusions and Message / Target Group Demographic Profile

We also feel that every analysis should have a conclusion that ties all the pieces together in a simple, easy-to-use format. The conclusion needs to clearly show the political environment, the dynamics of the election, key target groups, the demographic profile of those groups, and the messages that work best with them. We like to use a simple target and message profile chart -- basically a "reminder sheet" for the campaign team.

This summary will synthesize the more detailed and complex components of the analysis into a simple, actionable document. It will highlight the campaign's key objectives, targets and message. This conclusion serves as a constant reminder for the strategic team of where the campaign needs to focus.

Friday, September 14, 2007

Are you a D.I.Y. or Campaign by Committee Candidate?

As we move deeper into the 2008 election cycle and campaigns at all levels gear up, we see a process issue debated within campaigns about how to properly manage a political campaign.

There are essentially two models of operating a campaign, the "strong leader" and the "Campaign by Committee." The complex nature of modern campaigns and the finite period of time between inception and Election Day require campaigns at every level to have extraordinarily good management.

When considering running for elected office, potential candidates should understand that a "campaign" is an entity, not an activity. Professionals who work campaigns at the ground level typically describe the process of building a campaign as forming a large business overnight, and describe the feeling in the office the day after the election as walking into a ghost town.

There is an old axiom in the political world: "campaigns by committee lose." Campaigns by Committee are set up to fail. They fail because decisions aren't made quickly, and differing parts of the "committee" debate and micro-manage infinitesimally small details of the race such as the color of signs, what picture to use, the content of press releases, what events the candidate should attend, taking a position on an issue, etc. These campaign committees lack a leader, other than the candidate, with the command and authority to end the decision process and make final judgments.

This committee system wastes time in the decision-making process. The opportunity cost of decision paralysis (which inevitably sets in as the stress of the campaign mounts) is what kills campaigns by committee. Time and resources spent debating decisions are opportunities forgone: fundraising, contacting voters, and moving the campaign forward. Time and money are the most valuable resources a campaign has; they need to be spent wisely. If a campaign falls behind in money, it can possibly surge back, but time can never be recovered.

Campaigns at all levels need a dedicated and experienced decision maker who can execute decisions with total authority and in a timely manner. The decision maker or "strong leader" cannot be the candidate. A candidate can't do everything in a campaign; in fact, a candidate should only focus on meeting voters, raising funds and recruiting volunteers.

Campaigns at every level need a campaign manager. The campaign manager (CM) is the overall coordinator of the race and should have directive control over all operations and strategic decisions of the campaign. The campaign manager is the chief administrator of the campaign and the candidate. A campaign needs someone who can see the forest for the trees and make the trains run on time, with the ability to direct the candidate's schedule and coordinate with campaign staff and consultants to oversee the successful implementation of all aspects of the campaign's operations and strategy.

Here are some of the basic responsibilities of a campaign manager:

  • The campaign manager should have control over the campaign's finances (i.e. nothing gets spent without CM approval).
  • The CM must have directive control over all staff and consultants and all staff / consultants ultimately report and are accountable to the CM.
  • The CM must have the authority to direct the candidate’s activities and schedule (with candidate input).
  • The CM should be the final decision maker on all operational and strategic decisions (with input from staff, consultants & candidate) and he/she must have the authority to resolve internal disputes and make decisions between differing / conflicting recommendations to the campaign.
The CM reports to the candidate, but the candidate must understand that the manager is the person who is in charge of the day to day operations of the campaign.

To review, a candidate can't and shouldn't try to do it all. No matter how competent the candidate is, the candidate's time is best spent serving the functions of a candidate. A campaign can't be a democracy or even a republic; it needs to be run by a strong leader. Campaigns simply don't have enough time to debate the pros and cons of every decision. A campaign needs an authority figure: a campaign manager with the experience to manage the complex, high-stress organization that is a modern political campaign and the authority to execute timely decisions. These are the components of a winning campaign structure.

Tuesday, September 11, 2007

Two common mistakes that cost local campaigns their elections.

Campaigns outside of major population centers, lower budget campaigns, and local campaigns often rule out conducting strategic opinion surveys for two reasons: they think they can't afford a strategic survey, and the candidate and/or staff believe they know the city or district where they are asking for votes. These mistakes set up local and lower budget campaigns for failure.

While having a comprehensive survey that allows for an in-depth evaluation of the election dynamics and gauging voters' philosophical outlook would benefit any campaign, that level of information is not necessary to develop a winning strategy for a campaign that will likely be driven by direct voter contact (field campaigning) and direct mail.

A poll, regardless of size and length, should be designed to develop a strategy. As we've discussed previously on this blog, the cost of polling is arrived at primarily through the number of interviews conducted and the length of the survey (in minutes, not necessarily the number of questions). Even a shorter poll can highlight the persuasive winning arguments and expose the strengths and weaknesses of the candidates. A modest poll doesn't mean you are only going to find out who is winning, losing and better known… All polls are about developing and refining strategies that maximize the use of a campaign's finite resources.

Localized races like District Attorney, County Board, and Alderman and Mayor (in smaller cities) can make use of smaller sample sizes (depending on the size of the voting population) and shorter surveys that last 5 to 10 minutes. Surveys of this scope can accurately evaluate the opinions of the candidates and other public figures, test the strength of support for the candidates, identify key targets, and test a limited number of messages for the campaign while not breaking the campaign's budget.

One of the main objectives of a lower budget campaign in seeking polling should be to find the message(s) that have the widest appeal to the largest number of voters, and niche messages that are targetable to subgroups of appropriate scope.

When determining which messages work better with certain subgroups, a poll with a small sample size may not be able to establish with high statistical accuracy that a particular message works better with a very specific, micro-targeted subgroup. Larger, more practically targeted subgroups will allow for a statistically reliable reading of which message is most persuasive, and those subgroups will be useful and practical for your direct mail and field campaign budget.
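The statistical limit here is easy to quantify. Applying the standard worst-case margin-of-error formula to subgroups of shrinking size shows why micro-targeted readings become unreliable (a sketch assuming simple random sampling and 95% confidence; the subgroup sizes are hypothetical):

```python
import math

def moe(n, z=1.96):
    # Worst-case (50/50 split) margin of error at 95% confidence
    return z * math.sqrt(0.25 / n)

# Hypothetical counts from a 600-interview local survey
print(f"Full sample (n=600):                +/- {moe(600) * 100:.1f} pts")
print(f"Women under 50 (n=150, say):        +/- {moe(150) * 100:.1f} pts")
print(f"Women 18-25 w/ children (n=20):     +/- {moe(20) * 100:.1f} pts")
```

A reading on a 20-person cell carries a margin of error over twenty points, which is why the broader subgroup is the one worth buying mail for.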

To give an example of targeting for a lower budget campaign, let's assume the campaign's survey reveals that your candidate has weaker support among younger women. Generally speaking, it is more effective for a lower budget campaign to use broader, more practical direct mail targeting criteria (such as "women under 50" instead of "women 18-25 in households with children with net income over 100K") and deliver a message that tests well among many voters within a broader subgroup. Distributing your best messages to as many voters as possible is vital, because campaigns at the local level already deal with small numbers of voters, are lower profile, gather less earned media, and feature candidates who are typically less known and get less attention than higher-profile campaigns.

An experienced polling company should be able to evaluate the messages your campaign wants to test and help you refine them in a way that will maximize the quality of the data your survey will provide. Subgroup analysis to determine message targeting is still possible with smaller sample sizes, even with the inherently higher margin of error that a smaller sample size brings. The bottom line is that a campaign shouldn't spend money for a survey that provides data that the campaign’s budget won't allow it to use.

The other common mistake of lower budget and local campaigns is assuming a level of knowledge about the voters, their concerns, and what messages will work for their campaign. While it is possible to knock on every door in a smaller city or county, it is impossible for a candidate or campaign to gather the type of information that is revealed by a random sample survey of likely voters. Even the briefest of political polls will screen for likely Election Day voters and sample the electorate in proportion to the predicted Election Day turnout. The screening process helps to ensure that the opinions reflected in the survey are those of voters who will likely participate in and ultimately decide the outcome of the election. The responses campaigns receive from activists, supporters, and voters at the door do not generally represent, or generalize to, the opinions of all those who are likely to cast their ballot on Election Day. Polling provides vital strategic data to complement the anecdotal information that your campaign will receive.
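Sampling "in proportion to predicted turnout" is often enforced after fieldwork by weighting: each respondent's answers are scaled so the sample's demographic shares match the expected electorate. A minimal sketch of that adjustment (the region names and percentages here are invented for illustration):

```python
# Expected share of Election Day turnout by region (hypothetical targets)
target = {"North": 0.40, "South": 0.35, "Rural": 0.25}

# Share of completed interviews that actually came from each region
sample = {"North": 0.50, "South": 0.30, "Rural": 0.20}

# Each respondent's answers are multiplied by their region's weight,
# so over-represented regions count less and under-represented ones more
weights = {region: target[region] / sample[region] for region in target}

for region, w in weights.items():
    print(f"{region}: weight {w:.2f}")
```

Real surveys weight on several dimensions at once (region, age, gender, party), but the principle is the same: target share divided by sample share.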

Getting wrapped up in the cost of political polling and the need for a full-blown, micro-targeted direct mail campaign often reinforces the belief among candidates and their staff that their campaign doesn't need strategic research to understand its electorate. The reality is that lower budget and local campaigns can rarely afford to go into an election without the research that a basic survey can provide. Good information is the foundation of a successful campaign strategy. Thinking that political polling is too expensive and that they know their electorate well enough costs campaigns their elections.

Tuesday, August 28, 2007

Cross Tabulation Tables

Cross Tabulation tables (often referred to as "cross tabs" or "tabs") numerically display the results of a survey, showing how the answers to one poll question break down according to the answers to another poll question or other data derived from the sample.

Generally speaking, cross tabs are detailed statistical / numeric analysis of a survey, but in a raw, unfinished form. Your pollster will use cross tabs to generate a written analysis that will serve as your campaign's strategic guide and tactical play-book.

It is important to understand what basic components are necessary to generate a good set of cross tabs that will in turn give you the data needed to make strategic and tactical decisions. Although many campaigns have staff that understand how to read tabs, a campaign should not have to probe through cross tabs to get answers from their poll -- that's the job of the pollster.

Unlike a "topline report," which reports the breakdown of answers to a specific poll question in terms of the overall percentage of all respondents, a cross tab shows how the results of questions (such as who is winning in a trial heat) relate to a variety of demographic and attitudinal classifications.

(Figure 1)
Technically speaking, a cross tabulation displays the joint distribution of two or more variables (e.g. "Male" & "Candidate 1 Supporters," or "Male under 50 years old" & "Candidate 1 Supporters") (see Figure 1).

The topline report might show Candidate 1 capturing 24% of the vote and Candidate 2 capturing 50%, with 26% undecided. A cross tab (see Figure 2), however, will show Candidate 1 capturing 24% of the vote overall, but also show Candidate 1 capturing 31% among voters in the South Region and performing better among voters over 65, with 32% of their vote.

(Figure 2, Click Image to Enlarge)
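To see the topline-versus-cross-tab distinction in miniature, here is a sketch with a handful of hypothetical respondents (the column names and values are invented for illustration, not taken from an actual poll):

```python
import pandas as pd

# Hypothetical poll data: one row per respondent.
respondents = pd.DataFrame({
    "region": ["North", "South", "South", "North", "South", "North"],
    "ballot": ["Candidate 1", "Candidate 1", "Candidate 2",
               "Candidate 2", "Undecided", "Candidate 2"],
})

# Topline: each ballot choice as a percentage of all respondents.
topline = respondents["ballot"].value_counts(normalize=True) * 100

# Cross tab: ballot choice broken down by region (row percentages).
tabs = pd.crosstab(respondents["region"], respondents["ballot"],
                   normalize="index") * 100

print(topline)
print(tabs)
```

The topline gives one overall number per ballot choice, while the cross tab reveals that support is not uniform across regions, which is exactly the kind of breakdown described above.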

Now that you have a basic understanding of how cross tabulations work and are read, it's important to know which demographics and attitudinal groups should be included in your cross tabs.

Every pollster has their preferred way of structuring questions and response items. We've created a very basic set of sample cross tabs with easy-to-understand response items to help show you some possible combinations within the tables.

As you saw in Figure 2, demographic questions are among the most essential components of your cross tabs. You need to know how regional differences affect the opinions of voters; we discussed the importance of a regional breakdown in the previous section on the components of a survey. Beyond regions, your cross tabs should include, depending on the scope and purpose of your survey, demographic data like gender, age, race, political party ID, political ideology, length of residence, education level, income level, marital status, whether the household has school-aged children, and so on.

Cross tabs allow for more intricate demographic breakdowns. As shown in Figure 2 and Figure 3 below, it's possible to combine variables to arrive at compound demographics, such as men under age 50 or female African American voters. The combinations are nearly endless, but finely sliced groups tend to break down in statistical reliability: as the sample size within a grouping decreases, its margin of error increases. Your survey's sample size will largely determine the level of micro-analysis your cross tabs can support.
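The shrinking reliability of small groupings can be seen in the standard margin-of-error formula for a simple random sample; the sample sizes below are hypothetical:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points,
    for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A full sample of 400 vs. a 40-respondent subgroup within it:
print(round(margin_of_error(400), 1))  # roughly +/- 4.9 points
print(round(margin_of_error(40), 1))   # roughly +/- 15.5 points
```

Cutting the group to a tenth of the sample roughly triples the margin of error, which is why micro-analysis demands a larger overall sample.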

Beyond clearly defined demographic groups, your cross tabs should include various special attitudinal groups. These groups are defined through variables based on respondents' answers to both demographic and substantive questions in the survey. They can represent key targets for your campaign, and using them in your cross tab analysis allows the pollster to develop a demographic profile of these voters. Attitudinal groups (See Figure 3 & 4) can help you determine the demographic breakdown of significant groups such as weak/strong supporters/opponents and undecided voters, as well as variants such as undecided non-Democrats or uncommitted women. They can also include voters who support your party's top of the ticket but do not support your candidate, and swing / persuadable voters who, over the course of the survey, moved from undecided or supporting your opponent to supporting your candidate. Depending on the construction of your survey, you can also identify single-issue voters on hot-button issues. The array of attitudinal possibilities is limited only by the scope of your survey.
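Building such a group amounts to filtering the respondent data on a combination of answers. A rough sketch, with hypothetical columns and invented values:

```python
import pandas as pd

# Hypothetical respondent-level data (values invented for illustration).
df = pd.DataFrame({
    "party":  ["Dem", "Rep", "Ind", "Dem", "Ind", "Rep"],
    "ballot": ["Candidate 1", "Undecided", "Undecided",
               "Undecided", "Candidate 2", "Candidate 2"],
    "gender": ["F", "M", "F", "F", "M", "M"],
})

# Attitudinal group: undecided non-Democrats.
group = df[(df["ballot"] == "Undecided") & (df["party"] != "Dem")]

# Profile the group demographically, e.g. by gender.
profile = group["gender"].value_counts()
print(len(group), profile.to_dict())
```

The same pattern, combining any number of answer conditions, is how a pollster profiles any target group the campaign cares about.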

(Figure 3, Click Image to Enlarge)
(Figure 4, Click Image to Enlarge)

In the end, a well-constructed cross tab report provides the numeric foundation for any campaign's strategic plans and is the root "source" for a pollster's written analysis and strategic advice. Pollsters who produce intricately detailed, useful tabs that go well beyond the standard factors can offer campaigns higher-quality strategic advice: this level of detail yields more precise direction on message, targets and strategy, increasing the probability of a successful plan.


Tuesday, August 07, 2007

Components of Crafting a Survey

Most successful endeavors require prior planning; a successful political survey is no exception. The quality of your survey is dependent upon the value of the information gathered before it is crafted. There are generally seven research components needed to develop a strategic survey. We will briefly outline these criteria, explain their significance in surveying, and simultaneously review the various components that should be present in a good benchmark level survey.

A benchmark survey is a comprehensive evaluation of the election dynamics, candidates and issues. It is used to develop your strategy and message, and it serves as your campaign's tactical guide and strategic "play book." Benchmark surveys usually run 15 - 20 minutes in length, or longer, and should be as comprehensive as your budget allows.

Unlike Stephen Colbert’s interviews in his Better Know a District series, you have to have a basic understanding of the region where you will be polling.

What’s the population? How would you describe the various areas within the district where you want to poll? What are the characteristics of the area’s economy? Does the area have a unique history?

Population and voter registration data help determine the sample size needed to efficiently conduct a statistically accurate survey. All political surveys should establish multiple regions, and the regions established for your survey have to represent something usable and significant to your campaign.

Is the West end of town mostly town homes? Is there a significant minority concentration on one side of town? Are there a handful of distinct townships within the region to be polled? Are these groups of precincts representative of a higher-income area? Are these groups of precincts virtually not walkable by the candidate or volunteers?

Questions like these should be asked when developing regions and the list of demographic classifications that you will find useful to have in your survey (such as any combination of gender, race, income, education, union status, political ideology, party ID, etc). Members of your campaign need to be able to look at your survey analysis and quickly determine what’s happening, among whom, and where.

Knowing population data (and vote share information) will help you determine what percentage of the vote each region represents. The campaign should work with the pollster (if needed) to find the projected turnout for the district and your winning 50% plus 1 number. Knowing where the vote is coming from, and the opinions of voters in the various regions, can help not only focus your campaign’s message but also guide the allocation of the campaign’s budget.
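The 50% plus 1 arithmetic itself is simple; the registration and turnout figures below are invented purely for illustration:

```python
# Hypothetical district figures (not from a real race).
registered_voters = 38000
projected_turnout_rate = 0.42   # assumed turnout for this election

projected_turnout = round(registered_voters * projected_turnout_rate)
win_number = projected_turnout // 2 + 1   # 50% plus 1

print(projected_turnout, win_number)
```

Dividing that win number across regions by their vote share tells you roughly how many votes each region must produce.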

Answering preliminary questions about the area you plan on surveying establishes the background to keep in mind throughout the survey process. Knowing a little about the district’s personality serves as a guide through the survey data collection process and makes it easier to spot any potential abnormalities early on.

When crafting a campaign, you also need to know the key players within the district.

Are there key officials, public figures or private individuals who would have an interest or impact on the campaign? Would an endorsement from a certain official be a help or hindrance? Is it worth spending campaign time and money to promote a particular endorsement? Will campaigning or sharing resources with another candidate hurt or help my campaign?

A proficient benchmark survey will test the personal favorability ratings of people who are of interest to the campaign. Knowing whether someone holds significant recognition ratings, and whether they are viewed positively or negatively, will help determine if valuable campaign resources should be expended seeking or promoting endorsements and/or affiliations.

If public officials are being tested in the survey, their job performance ratings are usually also tested. When in the position of a challenger, the job performance rating and personal favorability of the incumbent are always tested. Incumbents should test their own favorability and performance ratings to help reveal any underlying potential weaknesses.

Job performance and personal favorability ratings oftentimes will reveal weaknesses that aren’t apparent in the benchmark survey's trial heats of candidates or issues. Sometimes voters will lean towards a known but disliked incumbent over an unknown challenger. For example, through cross tabulation reports, voters who responded as “weak supporters” in a trial heat question can be filtered into a group that shows weak intensity toward a candidate personally or in their job performance rating. Through a variety of response-position combinations, it’s possible to narrow down the demographic profile of voters the campaign should target.

Classifying groups of voters based on their support of and feelings toward candidates or issues is not so useful in itself; knowing what strategic message to communicate to those target groups to achieve a desired effect completes the purpose of the survey.

Detailed background information on the candidates or issues must be thoroughly reviewed, prioritized and incorporated into the survey. This information can contain both helpful and damaging material. Depending on the campaign's budget, it will include variations of the following: summaries of public records (tax liens, lawsuits, etc.), news clippings, issue positions, campaign finances, endorsements, court/judicial records, interviews, tax records, voting records and other vital documents.

This information-gathering process is called Opposition Research (aka OR, Oppo) and Vulnerability Research (research about yourself or the campaign issue). Both are critical components that must be completed before a survey is crafted. There are several reputable information-gathering firms who specialize in generating detailed strategic OR reports. Lower-budget campaigns can conduct their own baseline research, but they need to be careful about allowing volunteers to gather OR information, especially in candidate campaigns, where zealous volunteers may not be diligent about fact-checking.

Opposition research has received a bad name in recent years (negative attacks, "Swift Boating," etc.), but OR itself is not inherently harmful to the democratic process. The ethics of OR is a discussion for a different posting, but remember that the information tested in a survey is always left to the discretion of the pollster and the campaign. A good pollster will lend you his or her experience and guidance on whether to test controversial information and advise your campaign on the utility of the information and the repercussions that could occur.

Along with knowing information about your opponents, yourself or your issue, and details about the area to be polled, it’s also important to understand all major or pertinent issues that may affect the campaign, including local issues. Polling allows you to test voters' opinions on important known issues.

Is a big-box retailer petitioning to move into town? Will reproductive freedom be a wedge issue brought into the campaign by your opponent? Is there a statewide debate brewing about same-sex marriage, or funding stem cell research?

Conducting a political survey about important issues will reveal not only the opinions of voters, but also the intensity they feel towards issues. You’ve probably heard the expression "poll-driven politician"; your campaign doesn’t want that association. Polling should not be used to determine your (or your campaign’s) issue positions. Rather, it should help you develop an approach to talking about an issue (or decide whether to talk about it at all), discover natural allies (demographically, regionally), and refine how your campaign communicates its messages about the issues.

Not every issue will be known to your campaign. A good benchmark survey will incorporate an open-ended question early on to ask voters directly what issue or problem concerns them most. Oftentimes this question will reveal issues of which your campaign is already aware, but open-ended verbatim responses also reveal how issues are being discussed and lend insight into why voters feel the way they do. This form of open-ended questioning sometimes surfaces issues that haven’t yet boiled to the surface.

Knowing where voters stand on the issues will help refine your message. A benchmark survey can test the persuasive value of the potential arguments in support of your campaign and the arguments against your opponent(s). Testing messages isn’t so much about finding out what attacks stick against your opponent, but rather discovering which messages hold the most persuasive value and correlate most closely with the concerns of the voters. Your campaign needs to speak to the concerns of the voters; testing messages through a benchmark poll keeps a campaign on target by addressing the issues with messages that resonate with voters.

This primer on the components of a benchmark survey is meant to familiarize information seekers on the formation process of a political benchmark survey. While in no way would we consider this discussion comprehensive, it has hopefully answered some preliminary questions about planning for a political poll. Benchmark polls should be conducted as early as possible in a campaign. To summarize, a benchmark poll’s purpose is to get the strategy and message correct early in the campaign, rather than spending the rest of the campaign fixing it.

Monday, July 23, 2007

The Smoke-Free Illinois Act is Signed

Click here to see a video report about the signing!

With the signing of Senate Bill 500 into law as the Smoke-Free Illinois Act, Fako & Associates, Inc. would like to congratulate all the organizations that have worked tirelessly to protect public health in Illinois. We especially thank the American Lung Association, who led the Smoke Free effort around the state. We also would like to thank the American Cancer Society and the American Heart Association for letting us be a part of making Illinois a Smoke Free State.

The signing of SB500 is a strong commitment by the State of Illinois to protect lives from the deadly effects of secondhand smoke. We are proud to have been a part of this effort with our statewide work and our city-specific initiatives in Chicago, Oak Park and our state’s capital, Springfield, Illinois.

News on SB 500:
Blagojevich is poised to sign smoking ban
Blagojevich signs statewide indoor smoking ban
Blagojevich Signs Law Making Ill. Public Places Smoke-Free
Governor signs smoking ban bill
Governor signs statewide smoking ban


Wednesday, July 18, 2007

Cell Phones & Political Polling

As we enter the presidential primary polling season, articles are already surfacing questioning whether cell-phone-only users are distorting polling data. The common response to these allegations is that cell-phone-only households account for 12.8% of US households, and that their responses are not likely to be much different from those of their associated demographic group. Initial research has shown cell-phone-only voters to be younger, more mobile, less affluent, less likely to be married, more likely to be a minority, and more liberal. Pollsters have surmised up to now that the cell-phone-only demographic is less likely to vote and likely not all that different from other voters in their (18-34) group. We've consistently heard that cell-phone-only voters have a minimal impact on results (one to two percentage points) and fall within the margin of error on most standard surveys.

Younger voters, the go-to demographic for describing cellular-only voters, do indeed vote and are a significant part of the electorate (we've explained this before on this blog). Recent research from the Pew Research Center has dispelled another myth about cell-phone-only voters: they are not the same as their landline counterparts. The Pew Research Center conducted a dual-frame survey of landline and cell-only voters. Across the survey's 46 questions, there was an average difference of 7.8 percentage points between the two groups, with differences ranging from 0% to 29%. The Pew Research Center estimated that the maximum change in the final survey from adding cell-phone responses would be only 2 percentage points, and the mean change less than one point (0.7%). Those results are within the margin of error for most surveys.
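A back-of-the-envelope calculation (with invented response percentages, not Pew's actual question data) shows why even a sizable landline/cell gap moves a blended estimate only slightly when cell-only households make up 12.8% of the total:

```python
# Hypothetical responses to one question, by frame.
landline_share, cell_only_share = 0.872, 0.128  # 12.8% cell-only households
landline_pct = 40.0     # % giving some answer in the landline frame
cell_only_pct = 48.0    # same answer in the cell-only frame (8-point gap)

# Blend the two frames in proportion to their population shares.
blended = landline_share * landline_pct + cell_only_share * cell_only_pct
shift = blended - landline_pct

print(round(blended, 2), round(shift, 2))  # the shift is about 1 point
```

An 8-point gap between frames shifts the blended number by only about a point, consistent with the small net changes Pew reported.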

While today cell-phone-only voters are concentrated among the demographic groups outlined above, there is evidence to suggest that the proportion of cell-phone-only households will only increase, and that, as a result of growth in “wireless” services and aging demographics, the profile of a cell-phone-only voter is evolving. Wireless substitution will grow rapidly as the cost of services continues to decrease and their quality and convenience increase. Wireless substitution is also complicated by ever-evolving technology. For example, T-Mobile recently launched a dual-mode GSM/Wi-Fi phone that becomes a “landline” phone while in range of the user's wireless router. Other service providers are currently developing similar services.

Some pollsters suggest weighting samples to account for the uncovered population (such as weighting up certain demographics in a college town based on known demographics and voting behavior). While this tactic may work in larger, national surveys, a political survey for a state legislative district that captures less than 5% of the 18-34 demographic cannot be reliably weighted within that age group, for several reasons. Weighting may serve as a stop-gap to keep polling data accurate, if we believe that as more Americans become cellular-only voters their opinions will normalize to the population as a whole.
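A minimal post-stratification sketch (all shares and support levels invented) shows the mechanics, and also why weighting up a tiny subgroup is fragile: each of its few respondents ends up counted several times over, so any noise in that subgroup is amplified:

```python
# age group -> (respondents in sample, % supporting Candidate 1); invented.
sample = {
    "18-34": (20, 60.0),    # only 5% of a 400-person sample
    "35-64": (250, 45.0),
    "65+":   (130, 40.0),
}
# Assumed true shares of the electorate for each group.
electorate_shares = {"18-34": 0.20, "35-64": 0.50, "65+": 0.30}

n_total = sum(n for n, _ in sample.values())
unweighted = sum(n * pct for n, pct in sample.values()) / n_total
weighted = sum(electorate_shares[g] * pct for g, (_, pct) in sample.items())

print(round(unweighted, 1), round(weighted, 1))
```

Here the 18-34 group moves from 5% of the sample to 20% of the weighted estimate, a 4x multiplier on each of its 20 respondents.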

Other “soft sciences” have explored the “digital divide” and its effect on communities. As communication technologies change and integrate into internet hybrid services, such as VOIP (voice over internet protocol) and WiFi based UMA (Unlicensed Mobile Access), voters will continue to be further segmented along various demographic lines associated with access to digital communication technology. We do not believe there will be a dramatic "normalizing" effect between the various groups. No one can be certain yet where demographic lines will be drawn along still emerging technology channels.

Calling cellular phones is complicated for several reasons. Federal law prohibits the use of automated dialing devices when calling cell phones, so calls to cellular phones have to be dialed by hand, greatly reducing the efficiency of call centers. On the cost side, cellular users typically pay a per-minute rate, so cellular-based voters would incur a cost to participate in a survey, which would likely drive down participation rates. While compensation or reimbursement could be offered for the use of a participant's cellular minutes, this would add cost to conducting surveys. There are also issues of liability: what legal responsibility does the pollster have if a respondent gets into an accident while driving and participating in a survey? Cellular phones are also not tied to a geographic location, which creates difficulties in screening eligible participants; a respondent may live in one area but keep a previous address's area code and phone number. And as we've discussed, today's cell-only voters tend to be younger, creating challenges in developing a balanced sample for a political survey.

Moving beyond the problems of contacting cellular voters, some pollsters have increased their reliance on internet polling, such as pollster John Zogby of Zogby International. While internet polling has its own drawbacks (a conversation for another day), the process of mixed-use polling (to borrow a term from urban planners) seems like the most reliable and cost-effective method of reducing the sampling and non-response problems associated with cellular-only households. Mixed-use polling blends surveys, such as traditional telephone surveys with accompanying internet polls. The methodology needs to be refined and adapted, but the future of traditional telephone polling is apparent: adapt or bust.


Monday, June 04, 2007

IVR Polling

In 1998 and 2000, campaigns began using a technology called IVR (Interactive Voice Response). This technology was primarily used for traditional voter contact and GOTV calls -- and also widely implemented for surreptitious "push polls." It was only a matter of time before this technology would be used for polling.

In 2002, we saw the first widespread use of this technology in polling -- mainly with publicized media polls. By 2004, the use of this technology for publicly released polling became commonplace, with several organizations and media outlets routinely releasing polling data using this technology.

This has created a lot of inquiries about this type of polling.

First, it is important to understand the pros and cons of the technology.

IVR is a new and constantly improving technology in the relatively short history of telephone-driven public opinion surveying. IVR surveys typically employ random digit dialing (RDD) and use a recorded voice instead of a live interviewer. Their main advantages are lower production cost, low barriers to entry, and the speed with which they can be implemented and data delivered. Their disadvantages, however, are numerous, and campaigns considering IVR surveys must understand the tradeoffs.

Campaigns that cannot justify or budget the cost of traditional live-interview polling can opt for an IVR poll to provide basic baseline information, such as name ID, trial heats and perhaps opinions on a few issues, to help set the campaign’s direction, or to conduct simple tracking polls to adjust established campaign plans. IVR polls should not, at least at this time and in our opinion, be relied upon as the exclusive rationale for directing significant paid media or crafting strategic, micro-targeted field plans.

At least in high-profile races, IVR surveys have a history of providing reliable data on where the electorate sits on issues or in support of a candidate, but IVR polls tend to fail as the complexity of a poll increases. Their success in lower-profile, down-ballot races is largely unknown, because little data is publicly released for races at that level.

One example of an IVR survey breaking down is an early vulnerability / viability survey, where a lengthy informed question is asked about multiple candidates or both sides of a public policy debate are presented. IVR polls, so far, are unable to notice a confused participant and clarify or repeat statements. Strategic message testing is difficult to implement in IVR polling for similar reasons. Open-ended questions are also very difficult to conduct effectively within IVR surveys, leaving out invaluable "issue concerns" that help determine the relevance of the campaign’s various messages.

IVR surveys can be conducted quickly, regardless of scale, because of their ability to survey a large number of people in a very short period of time. They create a very large pool of "likely" voters, and their proponents suggest that respondents are somewhat less likely to lie to please a machine (than they are to a human) on screening questions regarding civic responsibility, such as their likelihood of participating in an upcoming election[1]. Detractors of IVR surveys say that the enlarged pool of "likely" voters also reflects a response bias issue, where potential respondents are more likely to refuse to participate in a survey conducted by a recorded voice than in one conducted by a live interviewer[2],[3].

We should point out that political polling is typically performed under constraints that many other surveys do not encounter. Budget, time constraints and the nature of political campaigns frequently call for adjustments to ensure reliable and credible data regardless of the survey being conducted by a live interviewer or computer.

IVR polling has additional limitations that force such adjustments. Traditional live-interview surveys can better target certain demographics at the introduction, such as asking to speak to the youngest male over 18. These factors and others lead IVR surveys to rely more heavily on weighting to bring their results into accordance with known demographics. IVR surveys also have a limited ability to prevent unqualified respondents from participating, and there are difficulties in validating respondents in general. These factors require IVR polls to rely on "adjustments" of the data after it is collected, rather than simply getting it correct at the initial point of data collection.

Every campaign occurs under unique dynamics and every survey faces unique circumstances which create different trade-offs to consider in the decision process of whether to employ a traditional live interview or IVR survey.

There is no such thing as a perfect poll, traditional or otherwise. IVR technology is constantly improving, and people are becoming more comfortable with these types of surveys as they become more prominent. Despite these advancements, it is vital to weigh the advantages and drawbacks of IVR polling before deciding to utilize an IVR poll for your campaign.

The bottom line is that IVR polls should not be relied upon for the collection of comprehensive strategic data of the kind traditional benchmark polls provide, but they should be considered for basic tracking surveys and for situations when traditional live-interview polling isn't a viable option because of budget constraints.



[1] Kaus, Mickey. 2004. "Dem Panic Watch, Part III" Slate.com, May 2. Link

[2] Quigley, Fran. 2003. "Under-counting Julia Carson: How Effective Are Political Polls?" Nuvo.net, November 13. Link

[3] Sabin, Warwick. 2004. "Survey Says?" Arkansas Times, October 7. Link


Friday, May 18, 2007

Youth Voting

To: Interested Parties
From: Fako & Associates, Inc.
Re: Youth Voting

Harvard University's Institute of Politics (IOP) was established in 1966 as a memorial to President John Kennedy and has engaging young people in politics and public service as its mission. I attended a conference at the IOP in March 2007 to discuss the preliminary findings of the Institute's youth survey and to listen to top campaign managers and consultants discuss their successful tactics for engaging the youth vote.

I have compiled notes from the conference and a few other sources related to youth voting in this report and have added my own commentary, as a campaign professional in the field of political polling, to elaborate on the topics covered by the various speakers at the conference.

Understanding the Youth Voter

At least half of all young people under the age of 24 do not have access to a landline telephone. In the polling industry, we have come to accept that it is difficult to reach the 18-24 demographic with traditional RDD or voter-file-driven surveys. Traditional surveys capture about 5% or less of this demographic, and it is infeasible to weight such a small sample. The current way to discover what young voters think and believe is to conduct surveys online, and even this method has its perils.

It’s important to realize that the youth vote is about new voters. Young voters who were 18 in November of 2004 will be 22 in 2008, but those who were 14 in 2004 will be 18 in 2008. The Progressive Policy Institute's "Trade Fact of the Week" highlighted that the youngest voters in the next presidential election were born a year after the fall of the Berlin Wall; turned three as the World Wide Web went public in 1993; entered elementary school as the dot-com bubble took off; watched the 9/11 attacks at age 11; and saw the beginning of the Iraqi occupation at 13.

The Institute's pollster, John Della Volpe, discussed the four pillars of political socialization for youth voters (families, school, churches or religious communities, and friends) and how the breakdown of two of those pillars has created a vacuum of non-voters and concentrated the importance of the remaining two. According to Della Volpe, leading up to the 2004 election, a majority of America's young people, up until the age of 18, lived in homes in which no parent had ever voted: the family transfer of the voting habit has broken down. He continued by discussing how schools stopped teaching civics in two waves: first, after Vietnam, when civics instruction was seen as a form of propaganda, and second, in the 1980s and 1990s, when the emphasis on testing squeezed it out.

Without the family transfer of the voting habit and the teaching of civics, that leaves churches, communities and friends. While churches were the go-to location when describing GOP strategy in the media during the 2004 cycle, the Institute's study revealed that religious ideology drives only roughly 25% of young people. The study refers to them as “religious centrists” and determined them to also be “swing voters.” Demographically, they tend to be more African American and Hispanic, conservative on some issues, such as abortion and same-sex marriage, and more liberal on others.

Approval of the United Nations is sometimes used in polling to gauge the philosophical outlook of voters. According to the survey's findings, youth voters, by a measure of three to one, believe the U.N. and other countries, and not the U.S., ought to take the lead in solving international crises and conflicts. This is an important view into the lives and the world view of young people and may help explain why Kerry did so well among the youth demographic in 2004. As we shall see, issues related to the Iraqi occupation and the war in general are the top concern (at 43%) of young voters.

Top Dozen Issue Concerns among All Youth Voters ([1])

Iraq: 29%
War (General): 14%
Economy: 6%
Environment / Global Warming: 5%
Foreign Policy: 4%
War on Terror / Terrorism: 4%
Health Care: 4%
Education: 4%
Immigration: 3%
Abortion: 3%
Poverty / Welfare Issues: 3%
Domestic Security: 3%

According to Della Volpe, despite the difficulties inherent in reaching and messaging to youth voters, in 2004 more votes were cast by young people under 30 than by seniors over 65. Seniors are typically the first demographic group that campaigns target, yet votes cast by voters between the ages of 18 and 29 outnumbered those cast by voters over 65 by about a million. But that was a highly engaged presidential election. What about state-level and local contests in off-year elections?

Campaign Tactics to Engage the Youth Vote

Campaigns, especially down-ballot and local campaigns, are not in the business of promoting the long-term viability of democracy. Youth voters are new voters and more expensive to contact than any other group, and the majority come from households without strong voting histories. How do campaigns decipher what young voters care about, and get them involved, without breaking the budget or burning campaign time?

Della Volpe suggested bringing your candidate to Starbucks, or some other place where the youth vote meets, to find out firsthand what these voters are concerned about. Depending on the campaign’s location this could be feasible, but it is unlikely to be the best use of campaign resources and candidate time. Other campaign professionals suggested learning what youth voters are concerned about through young staff members.

Several panelists suggested using technology to bring young people into the campaign, and not simply relegating them to canvassing, phone banking and youth-coordinator positions. Campaign managers like Greg McNeilly, who ran the DeVos gubernatorial campaign in Michigan, suggested placing some young people in leadership roles, such as community chairs. McNeilly said seeing their peers in leadership roles has a significant impact on other young people and dramatically helped the campaign recruit more of them.

Location-based volunteer recruitment is also an important factor. A couple of the successful campaigns actually located their offices near college campuses so students could walk over, or make a very short drive, to volunteer. This made a real difference in the campaigns' ability to attract volunteers to send e-mails, make calls, or lick envelopes.

Campaigns in college areas organized students to do "dorm storms," registering voters en masse. Again, the youth vote is about new voters: many young people come from families without a transferred habit of voting. College students move frequently, even during the timeframe of an election, so voter registration and petitioning should be accompanied by e-mail address and cell-phone number collection. Some campaigns created detachable contact-information forms on their petitions.

Summer interns shouldn’t be forgotten just because they went back to school in the fall. Some campaigns had specific ways to keep those people involved: keep them talking to their friends, keep them sending e-mails. Organizing and simple campaign functions don’t need to stop just because someone can’t physically get to the campaign office.

Cutting-edge campaigns are riding the open-source movement in politics. "Macaca Gate" opened campaign professionals' eyes to the distributed power structure of the internet. Anyone with a cell-phone camera or a Wi-Fi-enabled laptop can change the dynamics of a campaign, and people are now capable of participating in campaign dialogues like never before. User-generated videos, text messages, social networks, blogs, vlogs and their comments, and virtual precinct captains are dramatically changing the brick-and-mortar mentality of political campaigning.

Virtual precinct captains are a vital resource for any campaign, but especially for lower-budget campaigns. Serving the role of a traditional precinct captain, someone interested in working for the campaign can get trained and download all the materials needed to start organizing their precinct without ever having set foot in a campaign office. This is an ideal approach for getting young people involved because of the low hurdles associated with communicating over the internet.

Some campaigns found that text messaging was most successful for action alerts: prompting people to participate in radio call-ins, take online surveys, or check out something uploaded to the campaign's website. Turnout for regional events can be bolstered by texting supporters by zip code. As technology progresses, campaigns will be able to send a get-out-the-vote streaming video message to video-enabled phones. Campaigns can even fundraise via SMS, though contributions are currently limited to under ten dollars. Over time the cost of setting up such a system will decrease and the contribution limits will rise.

Along with open-source campaigning, campaigns need to learn to give up some (but not all) control and let people have some autonomy. Some campaigns allowed individuals to set up their own fundraising sites and pro-candidate sites, create their own videos, and develop their own social networks for the campaign. This sort of organic campaigning is becoming ever more prevalent with fundraising sites like ActBlue and social-network (Web 2.0) communities like Myspace and Facebook. Anyone can set up a blog on a service like Blogger and start fundraising in less than half an hour.

Technology can only go so far: face-to-face contact is the most persuasive element of campaigning, regardless of the demographic. A 2001 study found that youth-to-youth canvassing, direct contact at the door, increased impact by eight to ten percent, and phone banking by three to five percent, while a targeted mailing to youth had a neutral impact.

Down-ballot campaigns don’t need to be technology aficionados; they just have to do better with the youth vote than their opponents. Get young people into the office and involved, cede them some control, especially with new media, let them be creative, and give them the tools to build your campaign.