As one can tell from the deluge of data, surveys are easy to do. In the financial world it seems not a week goes by without the release of some survey results. Most of these contribute nothing unexpected because they simply parrot the questions of other surveys, which is not always bad. Sometimes, though, the results are not what they seem.
Representativeness refers to how well the survey sample represents the larger population. For example, a survey claiming that “98 percent of Americans say the St. Louis Cardinals are the best baseball team ever” may not represent America if the sample was drawn from a two-mile circle around Busch Stadium. A good sample can be difficult to obtain in financial surveys for several reasons. One is that surveys usually rely on self-reporting, and people tend to embellish their income and assets. Another is that people report what they think they should report, or intentionally misstate their answers. These issues are why multiple surveys covering similar ground can be a good thing: if different groups give similar answers, the results are more likely to be accurate.
Intentionally biased questions and responses are more often found in non-financial areas. These are the surveys where the headline reads “People surveyed agree that…” and the only answer choices were: completely agree, agree, and partially agree. More common are poorly defined questions that can be interpreted in different ways. For example, a question might ask whether a person feels there will be enough income in retirement to cover their needs. The problem is that one respondent may define needs as food and shelter while another may feel they need tropical vacations and five-star dining. Although the answers may be helpful from a psychological perspective, they aren't much good if you're trying to pin down a dollar figure for a group.
Even with representative, well-written surveys, another issue is understanding what the results really mean. Say a survey finds that 30 percent of people feel they won't have enough money to cover food and shelter in retirement. Assume the finding is accurate and the consensus is that this is an unacceptably high percentage. Before solutions can be proposed we need to understand the why behind the number, and that requires more questions. If further questioning reveals those surveyed feel this way because they don't know how to save, then financial education might be the solution. If the answer is that they won't have enough money because every time they accumulate a few dollars in their 401(k) plan they withdraw and spend it, then all of the “nudging” ideas, where people must opt out to avoid contributing, will prove useless. For this group, mandatory, untouchable retirement accounts may be the only answer. All too often survey results are offered as proof of a problem, or a solution, that the survey never actually established.
There are certain red flags to watch for. One is the statistics: the confidence level should be at least 95 percent and the margin of error around three percentage points, meaning we can be 95 percent confident that the true population figure lies within three points of the reported percentage. Another is the sample itself: a sample of millennials will generally produce different results than one of retirees, so the sample group should mirror the population you're interested in. Open internet surveys, the kind where anyone can answer, generally attract those who feel strongly about the topic, and this tends to distort the results. Targeted internet surveys sent to a known population, however, are fine.
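The arithmetic behind that three-point figure is straightforward. Here is a minimal sketch (the function name and the sample size of 1,067 are illustrative, not from any particular survey), assuming a simple random sample and the usual normal approximation for a proportion:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Margin of error for a surveyed proportion at ~95% confidence.

    z = 1.96 corresponds to a 95 percent confidence level, and
    proportion = 0.5 is the worst case (it gives the widest margin).
    Assumes a simple random sample and the normal approximation.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Roughly 1,067 respondents produce about a 3-point margin of error:
print(round(margin_of_error(1067) * 100, 1))  # about 3.0
```

Because the margin shrinks with the square root of the sample size, quadrupling the sample only halves the margin of error, which is why most national polls stop at roughly a thousand respondents.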
Better data come from observation: watching what people actually do rather than relying on what they say. The problem is that observational studies are far more expensive than surveys. In the meantime, we'll keep seeing survey results, both the good and the bad.