Lies, damn lies, and statistics: or is Robert Parker the god of wine?
It is always interesting when people make assumptions about fields of which they have little or no knowledge and then make sweeping, dismissive claims based on that assumed knowledge.
This often happens when people apply statistics to “prove” their argument. A great example is a recent post on Vinography.
Alder Yarrow, the author of Vinography.com (who is very fond of creating straw men and then knocking them down) uses a study by the Cornell Hospitality Institute to bolster his argument that Parker is not a god.
Well, not exactly. He uses this graph from the study showing the numerical scores given by Robert Parker, Wine Spectator, and Steven Tanzer to illustrate his contention that the scores by all the critics are essentially the same. This, he argues, helps prove Parker does not have a “monolithic palate” and does not influence the creation of Parkerized wines.
When you are presented with such poor logic it is hard to know where to start.
The scores are not really the same
Let’s begin with the claim that the scores given by Parker, Wine Spectator, and Tanzer are essentially the same. This assertion assumes that the 100-point scale is really a 100-point scale. But let’s face it – it’s not. None of these critics has ever given a first-growth wine a score of zero (a bottle with a cork in it gets at least 70 points).
The scores on the graph range from approximately 86 to 96. To me, that looks like a ten-point scale with an offset of about 85. Given that there is only a 10-point spread from high to low, a difference of 1 point is really 10% of the effective scale. In school, a 10% change relative to your peers is about a letter-grade change, say from a “B” to an “A”. Since Parker’s scores over thirty years have averaged 1.3 points higher than Tanzer’s, Parker is in effect giving wines a letter grade higher, on average, than Tanzer.
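For readers who like to see the arithmetic spelled out, the rescaling argument above can be sketched in a few lines of Python. The score range and the 1.3-point gap are the approximate figures quoted in this post, not values recomputed from the Cornell study:

```python
# A sketch of the back-of-the-envelope arithmetic above, using the
# approximate figures quoted in the post (scores spanning roughly
# 86-96, and a 1.3-point average gap between Parker and Tanzer).
# These numbers come from the text, not from the Cornell data itself.

LOW, HIGH = 86, 96        # approximate range of scores on the graph
SPREAD = HIGH - LOW       # the effective scale is only ~10 points wide

def share_of_scale(point_diff, spread=SPREAD):
    """Express a score difference as a fraction of the effective scale."""
    return point_diff / spread

# A 1-point difference is 10% of the effective scale, and Parker's
# 1.3-point average edge over Tanzer is 13% of it -- roughly a full
# letter grade, if 10% is about one grade step.
print(f"{share_of_scale(1.0):.0%}")  # -> 10%
print(f"{share_of_scale(1.3):.0%}")  # -> 13%
```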
The 100-point scale and the 90-point threshold
Another problem with the 100-point scale is the effect of the magic number 90. Looking back on our school days, a score of 90 or above was an “A”. A wine that receives a score of 89 is treated like a “B” wine. If a critic rates wines such that they cross the magic 90 barrier, sales of those wines will increase.
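The school-grade analogy above can be made concrete in a couple of lines. The cutoffs here are the usual US school grading bands, used purely for illustration:

```python
# Sketch of the 90-point threshold analogy: mapping a score to a
# school letter grade using the familiar US grading bands.

def letter_grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C or below"

print(letter_grade(90))  # -> A  (crosses the magic barrier)
print(letter_grade(89))  # -> B  (one point lower, a grade down)
```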
So to drive a market, all a critic needs is a slight nudge over or under the 90 mark. If Parker gives the highest ratings, particularly around the magic number 90, that would be an indication that Parker does influence wine style, not the contrary as argued by Alder.
There is no price data to compare the scores against
Finally, Alder bases his argument on influence, by which I believe he means influence on price. Determining whether any of the critics has this kind of influence would require looking at wine sales as a function of the scores given by the various critics over time. That would require far more data than Alder presents.
How blind are these tastings, really?
More accurately interpreted, the question raised by the graph Alder presents is not which critic is more influential, but whether ratings by different critics are truly independent assessments.
This is a question I have struggled with for some time. I take part in blind tastings, but some of the assumptions surrounding blind tastings do not meet scientific standards. For example, wines from a specific region or of a certain type are tasted together, which greatly weakens the assumption that the critics are “blind” to the wines they are tasting. In addition, many wine critics have excellent taste memories. I know I can spot wines from specific vineyards and makers, and I do not have the talent these critics possess. So I find it doubtful that they taste and score wines without this additional input.
You can’t avoid the community of wine
Finally, all of the critics Alder talks about have been in the wine industry for 30 years or more. In that time relationships are formed, people learn to like or dislike each other, and these relationships can influence the work of the critics. For example, when the critics at the Wine Spectator found that many of the bottles of wine from one California winery were corked, they went first to the winery to alert it to the problem.
This may have been the “right” thing to do, but it is not the act of an independent critic; it is the action of a member of a community. And within a community, there are unwritten rules and agreements that affect the behavior of its members.
In the end, Alder’s conclusion that Parker is not that influential may be correct, but there is nothing in his post or in the graph he presents that proves it.