ABOUT OUR POLLING

Frequently Asked Questions

Below we address some frequently asked questions regarding our survey research approach, technology and methodology.

 

How Does Morning Consult Ensure Polls Are Accurate?

Morning Consult oversees all aspects of the survey research process. To ensure the accuracy of our survey research, Morning Consult continually evaluates performance, monitors demographic shifts, and conducts general due diligence to certify that our methodologies and systems meet the highest standards.

We take a number of steps throughout the survey process to ensure our polling is accurate and representative:

1) Sample Size & Demographic Depth: MCPI conducts daily interviews with a representative sample of over 5,000 registered voters and 4,400 likely voters in the United States. We collect over 100 demographic variables on every single national and international survey, which enables us to provide insights on numerous subgroups and apply an advanced weighting methodology. 

2) Questionnaire Design & Survey Experience: Morning Consult follows best practices around scale construction, question wording, survey design, and randomization to ensure validity and minimize bias. Additionally, Morning Consult has implemented a wide range of quality assurance checks within the survey to ensure we are capturing respondents’ true responses (a simplified sketch of some of these checks appears after this list):

  • Open-ended Task: We include a randomly generated, basic task asking respondents to solve an addition or subtraction problem.
  • Multiple choice Task: We include a randomly generated, basic multiple choice pattern question with 10 response options (e.g., “Which of the following is not a color?”, “Which of the following is not a letter?”) that has one correct answer.
  • Timing Tests: We exclude any respondents who complete a national survey in less than one third of the anticipated median length of the survey.
  • Attentiveness Tests: We include a set of implausible statements in grid-type questions. Any respondents who answer implausibly to these questions are excluded.
  • Straight-Liner Checks: Respondents who select the same response (e.g., Very Favorable) for more than 90% of statements in the media battery or brand favorability battery are excluded.
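
To make two of these checks concrete, here is a minimal, hypothetical sketch in Python (using pandas) of how the timing and straight-liner filters described above could be implemented. The column names and data layout are illustrative assumptions, not Morning Consult’s actual survey schema.

    import pandas as pd

    def apply_quality_checks(df: pd.DataFrame, battery_cols: list,
                             anticipated_median_seconds: float) -> pd.DataFrame:
        """Drop respondents who fail basic speeding or straight-lining checks.

        Assumes df has an 'interview_seconds' column and that battery_cols
        lists the columns of a grid/battery question (illustrative names).
        """
        # Timing test: exclude anyone who finished in less than one third
        # of the anticipated median survey length.
        too_fast = df["interview_seconds"] < anticipated_median_seconds / 3

        # Straight-liner check: exclude anyone who gave the same answer to
        # more than 90% of the statements in the battery.
        modal_share = df[battery_cols].apply(
            lambda row: row.value_counts(normalize=True).iloc[0], axis=1
        )
        straight_liner = modal_share > 0.90

        return df[~(too_fast | straight_liner)].copy()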

3) Survey Sampling and Post-Stratification Weighting: Morning Consult uses a stratified sampling procedure for our daily national surveys. The strata are defined by the interaction of age (4 levels), gender (2 levels), and English/Spanish language (2 levels), for a total of 16 cells. The population distribution across these cells is obtained from the 2017 5-year estimates of the American Community Survey (ACS).
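
As a rough illustration of this design, the sketch below (Python) builds the 16 strata from the three variables named above and allocates a fixed number of interviews to each cell in proportion to its population share. The share values would come from the ACS estimates; the function and variable names are assumptions for illustration only.

    from itertools import product

    AGE_GROUPS = ["18-29", "30-44", "45-64", "65+"]
    GENDERS = ["Male", "Female"]
    LANGUAGES = ["English", "Spanish"]

    # The 16 sampling cells: every combination of age, gender, and language.
    STRATA = list(product(AGE_GROUPS, GENDERS, LANGUAGES))

    def allocate_interviews(population_shares, total_interviews):
        """Allocate interviews to each stratum in proportion to its population share.

        population_shares maps each (age, gender, language) cell to its share of
        the adult population (e.g., from ACS estimates); shares should sum to 1.
        """
        return {cell: round(total_interviews * population_shares[cell])
                for cell in STRATA}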

Morning Consult weighting targets are obtained from high-quality, recent, gold-standard government sources such as the U.S. Census Bureau’s Current Population Survey (CPS) or American Community Survey (ACS).

Weighting Variables and Specific Values

  • Age: 18-29, 30-44, 45-64, 65+ 
  • Gender: Male, Female 
  • Education: High School or less, Some College, College Grad, Postgraduate 
  • Race/Ethnicity: White, Black, Hispanic, Other 
  • Home Ownership: Own, Not-Own
  • Marital Status: Married, Single, Other
  • Population Density: Metropolitan Area (CBSA) Size
  • 2016 Presidential Vote History: Clinton, Trump, Other, Did not vote
  • Race by Education: White/Non-white by HS or less, Some College, College Grad
  • Age by Gender: 18-29, 30-44, 45-64, 65+, by Male/Female

Morning Consult has selected the specific weighting variables outlined above for the following reasons: there is high-quality Census data available for each, they change gradually across the entire population, and they are straightforward to measure on survey questionnaires. Furthermore, these weighting measures are often highly associated with key outcomes of interest such as presidential approval and consumer confidence.
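
Morning Consult does not publish its exact weighting algorithm, but one common way to hit marginal targets like those listed above is raking (iterative proportional fitting). The sketch below (Python) is a generic illustration of that technique under simplified assumptions, not a description of Morning Consult’s production code; it assumes every target category is present in the sample.

    import pandas as pd

    def rake(df: pd.DataFrame, targets: dict, max_iter: int = 50,
             tol: float = 1e-6) -> pd.Series:
        """Return post-stratification weights that match target marginal shares.

        targets maps a column name (e.g., "education") to a dict of
        {category: target proportion}; the proportions in each dict sum to 1.
        """
        weights = pd.Series(1.0, index=df.index)
        for _ in range(max_iter):
            max_shift = 0.0
            for col, target in targets.items():
                # Current weighted share of each category in this column.
                current = weights.groupby(df[col]).sum() / weights.sum()
                # Scale each category so its weighted share matches the target.
                factors = {cat: target[cat] / current[cat] for cat in target}
                weights = weights * df[col].map(factors)
                max_shift = max(max_shift,
                                max(abs(f - 1.0) for f in factors.values()))
            if max_shift < tol:
                break
        return weights

For the state-level weighting described below, the same kind of procedure could in principle be run separately on each state’s respondents using state-specific targets.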

For our state-level polling, weights are applied to each state separately based on age, gender, education, race, home ownership, marital status, population density, presidential vote history and — for a subset of states — race by education as well as an age-by-gender interaction.

4) Demonstrated Performance: We validate our results by comparing them to verified, independent data. On every poll, we compare our results to known data on health statistics, labor market characteristics, personal characteristics (like drinking and smoking), and dozens of additional measures from official government surveys such as the American Community Survey (ACS), the Current Population Survey (CPS), and the National Health Interview Survey (NHIS).
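
As a simplified, hypothetical example of this kind of benchmarking (the numbers and variable names below are placeholders, not real statistics), a weighted survey estimate can be compared directly against a published government figure:

    import numpy as np

    def weighted_share(indicator, weights):
        """Weighted proportion of respondents for whom the indicator equals 1."""
        return float(np.average(indicator, weights=weights))

    # Placeholder data: 1 = respondent reports smoking, with survey weights.
    smokes = np.array([0, 1, 0, 0, 1, 0])
    weights = np.array([1.2, 0.8, 1.0, 0.9, 1.1, 1.0])

    survey_estimate = weighted_share(smokes, weights)
    benchmark = 0.14  # placeholder value, not an actual NHIS statistic
    difference = survey_estimate - benchmark  # how far the poll is from the benchmark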

 

How did Morning Consult’s polling perform in 2016?

During the 2016 presidential election, Morning Consult had one of the most accurate national polls. Morning Consult survey research data projected Democratic candidate Hillary Clinton winning the popular vote by a margin of 3 percentage points over Republican candidate Donald Trump (with a margin of error of +/-3 percentage points). Although she lost the Electoral College, Hillary Clinton did win the popular vote by a margin of 2.1 percentage points.

 

Do people lie to pollsters?

Morning Consult has pioneered research in this space, particularly around social desirability bias and mode effects.

In September of 2020, Morning Consult conducted our third “mode study” exploring whether “shy Trump voters” were impacting the 2020 presidential election. This study – which built on our previous research in 2015 and 2016 – also asked respondents about other issues such as discrimination, police protests, and even credit card debt. The results show clear evidence that voters are hesitant to express their opinions on discrimination, protests and personal finances during a live telephone interview. And while there is no indication that “shy voters” are affecting the overall national popular vote to a statistically significant degree, nuances in the data leave open the possibility that there could be effects at the margins for both Trump and Biden.

In short, social desirability will always play a role in data collection because of human nature. That’s why we prioritize giving respondents the most anonymized forum possible, so that our data illuminates public opinion with as little influence from social desirability as possible.

 

Are online surveys better than phone polls?

The polling and market research industry has shifted away from landline and cellular phone polling toward online polling, including surveys taken on mobile devices. Landline telephone polling faces a number of critical challenges that make implementing high-quality, representative surveys harder.

First, response rates to telephone polls plummeted from around four in 10 in the late 1990s to less than one in 16 today, according to Pew Research Center. Respondents using cell phones are increasingly difficult to reach because many are under the age of 18, many live outside the area code with which their phone numbers are affiliated, and many are busy or driving when they are contacted. (The New York Times, “What’s the Matter with Polling.”)

Further, about 3.5 percent of households no longer have landline telephones today, up from 2 percent in 2012, according to the National Health Interview Survey. That population is not reachable using traditional phone polling methods, and those factors can lead to higher costs and less representative data.

At the same time, there has been significant movement online over the past two decades among almost every demographic group. Tens of millions of adults take online surveys, which enables researchers to interview larger, more representative samples more regularly. According to Pew Research Center, nine in 10 adults use the Internet, a figure that is up from about 50 percent in 2000. Prominent statistician Nate Silver analyzed results from the 2012 presidential election and found that some of the most accurate results were from survey research conducted online. (The New York Times, “Which Polls Fared Best (and Worst) in the 2012 Presidential Race.”) See also: Pew Research Center, “Evaluating Online Nonprobability Surveys.”

 
