Pew Research Center has a reputation to uphold.
They have long been in the business of polling and reporting on trends, and they are one of the better organizations at it, though they can make mistakes.
But when they do, it is not because of how they pick the sample size.
There is rigorous probability theory behind how an extremely small sample can be shown to predict the behaviour of an extremely large population.
When mistakes happen, they come from the distribution within the sample that was picked, not from how the sample size is arrived at mathematically.
It is easy to imagine that a larger sample must yield better results. That is a common layman's view, but it is supported neither by actual results (as in elections) nor by the underlying mathematics.
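To make that concrete, here is a minimal sketch (my own illustration in Python, not anything Pew publishes) of the standard margin-of-error formula for a simple random sample. Notice that the size of the population never appears in it: roughly 1,000 respondents give about a ±3% margin at 95% confidence, whether you are polling a city or a whole country.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of n people (worst case p = 0.5).
    The size of the underlying population appears nowhere."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.1%}")
# n =    100: +/- 9.8%
# n =   1000: +/- 3.1%
# n =  10000: +/- 1.0%
```

The catch, of course, is the phrase "simple random sample": the formula only holds when every member of the population is equally likely to be picked, which is exactly where real polls go wrong.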
There are two classic cases in which very large samples yielded completely wrong results.
1. "Big Data: The Parable of Google Flu: Traps in Big Data Analysis", Science, 14 March 2014, Vol. 343, No. 6176, pp. 1203-1205
Google Flu was a project that turned out to be a complete disaster. The above article, from the respected journal Science, is not accessible to most readers, so I will summarize its content.
Google published a famous paper in 'Nature' about four years ago claiming that they could predict the trends of flu outbreaks, without any tests, simply by analyzing the search terms coming from various physical locations. The data sample was huge, in the tens of millions. But their predictions missed the actuals by a very wide margin, and they completely missed the H1N1 outbreak of 2009. In fact, a sample size of 1,000+ would have been much more accurate, which is what the CDC (Centers for Disease Control and Prevention) was using.
2. "In 1936, the Republican Alfred Landon stood for election against President Franklin Delano Roosevelt. The respected magazine, The Literary Digest, shouldered the responsibility of forecasting the result. It conducted a postal opinion poll of astonishing ambition, with the aim of reaching 10 million people, a quarter of the electorate. The deluge of mailed-in replies can hardly be imagined but the Digest seemed to be relishing the scale of the task. In late August it reported, “Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totalled.”
After tabulating an astonishing 2.4 million returns as they flowed in over two months, The Literary Digest announced its conclusions: Landon would win by a convincing 55 per cent to 41 per cent, with a few voters favouring a third candidate.
The election delivered a very different result: Roosevelt crushed Landon by 61 per cent to 37 per cent. To add to The Literary Digest’s agony, a far smaller survey conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt. Mr Gallup understood something that The Literary Digest did not. When it comes to data, size isn’t everything." The smaller sample used by Gallup was of the order of 1000+.
(From my notes; the article appeared in the Financial Times.)
Mistakes are usually made in how samples are designed. Success rarely depends on the number of people polled.
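To illustrate why, here is a toy simulation (with hypothetical numbers of my own, not the actual 1936 data): suppose 61% of voters really favour candidate A, but the mailing list used for a giant poll reaches A's supporters only half as often as B's. The huge biased poll gets the election badly wrong; a small random poll does not.

```python
import random

random.seed(0)

TRUE_SUPPORT = 0.61  # hypothetical true share of voters favouring candidate A

def poll(sample_size, reach_a=1.0, reach_b=1.0):
    """Keep drawing voters until `sample_size` of them have responded.
    reach_a / reach_b model how likely supporters of A or B are to be
    reached by (and respond to) the poll at all."""
    a = b = 0
    while a + b < sample_size:
        is_a = random.random() < TRUE_SUPPORT
        reached = random.random() < (reach_a if is_a else reach_b)
        if reached:
            if is_a:
                a += 1
            else:
                b += 1
    return a / (a + b)

# A huge but biased sample: A's supporters are only half as reachable.
print(f"2,400,000 biased responses : A = {poll(2_400_000, reach_a=0.5):.1%}")
# A small but genuinely random sample: everyone equally reachable.
print(f"    1,000 random responses : A = {poll(1_000):.1%}")
```

The biased poll of millions reports A at roughly 44% when the true figure is 61%, while the random poll of 1,000 lands within a few points of the truth. That is the point Gallup understood: fixing how the sample is drawn buys far more accuracy than adding more respondents to a badly drawn one.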
Shooting from the hip often does not reach the target.
That is the lesson.