Wine Analysis by Region

By Dan Berger  2008-11-02 20:28:49


The way some experts rate wine, relaying quality verdicts to readers with numbers from 50 to 100, has always seemed a bit odd, since the concept of quality is anything but universally understood.

It’s a bit like saying a particular Jackson Pollock is “better” than a particular Picasso, or a Beethoven symphony is “better” than a Stravinsky symphony. The conceit becomes even more ludicrous when you compare different forms of music (or any other art forms) that simply cannot be compared, such as Mahler’s Kindertotenlieder and the works of Kool Herc and the Herculoids. Sure, both are thought to be music and both are sung, but once past the vocal cords, the ability to compare ends.

I prefer to judge regionally based, double-blind wine competitions in which the judges are told the region from which the wines come. Moreover, I prefer a moderate number of wines, and the results are best understood to reflect the overall perception of a wine as of the day of tasting and for the next several months. The relevance of all evaluations fades as time passes.

The general idea here is that quality varies and includes regionally distinctive variants. We all know that this is a crucial aspect of making meaningful decisions on a wine’s quality. A few examples here are apt.

When we taste a Sauvignon Blanc and are tempted to rank it very highly, we often do so because we determine it to be typical of its region: a rating, say, that heartily approves of the cut-grass, lime, gooseberry aroma of a Marlborough Sauvignon Blanc. Not just New Zealand, but more specifically Marlborough. This contrasts with the style of Sauvignon Blanc that isn’t as assertive in the gooseberry but has a faint bit of tropical character, such as the Craggy Range of Martinborough, or the even milder grass of Hawke’s Bay. There is a difference, and Sauvignon Blanc purists would be pleased to suggest that all the various styles are valid, and that each area makes exceptional wines with differing characteristics that show evidence of their terroir.

By contrast, tasted blind, I have often seen Sauvignon Blanc wines from the Sierra Foothills that deliver a mostly Graves-like character. Is this a lower-quality wine? The fact is, when you buy a Graves, you are seeking out the character of truly magnificent Sauvignon Blanc that has a Graves character, with that stony-mineral aroma and tactile tannin structure plus a bit of oak. Even those Sierra Foothills SBs that don’t have any oak still carry the mineral elements of Sauvignon Blanc as displayed best in Bordeaux, so the scores for the best of them should be equated somehow with the scores of the best of white Bordeaux. They almost never are. Indeed, do the major wine publications even know that Sierra Foothills Sauvignon Blanc is akin to Graves? I have never seen the comparison made by others.

Then there is the Sauvignon Blanc with a touch of Loire herbal-ness. It may come from Dry Creek Valley, and the skilled judge with decades of experience can use a sense-memory to reflect back on the Dry Creek Sauvignon Blancs he or she has tasted over the years and watch the wine as it evolves in the glass and the bottle over time. Another Sauvignon Blanc comes along with more olive/grass/hay character. Could it be Russian River? That’s certainly what the cooler areas of Russian River yield in Sauvignon Blanc.

As is evident, the frame of reference becomes a key factor in visualizing different approaches with the same grape. And this is all terroir-driven, and it broadens the scope of fine wine we have access to. (However, when Sauvignon Blanc is put through a malolactic fermentation, or aged in oak, or subjected to other regimes, the terroir-ishness seems to be scalped.)

This regional sort of analysis should lead to discovery of a lot more interesting wine, and not what often happens: the most assertive wine gets the top score and the others take the caboose. With the vast majority of those who evaluate wine, such regional distinctions may not be a germane issue, but they have a real place in the world of wine. This is most important when the competition is not a regional one, but a national or international event.

How many times have we seen reviewers speak of a Pinot Noir as having Burgundian character, or a Bordeaux as being rich enough to be Napa-like? It happens all the time and we are, in almost all cases, speaking of the region as the key element that gives a character we recognize and one that contributes to a wine’s quality.

When judgings are done in which the general region is known (such as at the Sonoma County Harvest Fair wine competition, at which I often judge), we know the fruit in the wines all came from Sonoma County. This may not seem much of an advantage in determining quality, since Sonoma is such a vast region, but it can be.


Disclosure of Origin
Knowing the region does a lot for a blind tasting. For instance, it helps the taster to justify an element that he or she might originally think to be “out of left field,” and thus aberrant. This is especially true if a particular characteristic (such as black pepper in Cabernet Sauvignon or red currant in Cabernet Franc) begins to show up in more than one wine in the evaluation. When such characteristics are replicated in disparate wines (such as the black pepper we see in numerous red-wine varieties grown in British Columbia’s Naramata Bench), we can ascribe to that character a regional basis.

In some competitions where the judges are not even told the varietal being judged, the evaluator is really at a loss to determine if a wine is displaying the proper characteristics for the wine type it is supposed to be. This is one reason, for example, why categories labeled “blended red wines,” with no other identifier, are so hard for any judge to evaluate properly. It is one of the ways that judgings in Europe can be confounding. One basic tenet of such evaluations is: “Here is a white wine. We will tell you nothing else about it. It’s your job to state whether it is a great wine or a poor wine.” What a nightmare. Is it a Chardonnay that smells like Riesling, or is it an oaky Riesling?

But look at the way many if not most wine consumers buy wine: by number. Rating wine by numbers, a popular sport among many people around the world, is rife with fallacies, many of which were outlined decades ago by the late Prof. Maynard Amerine at the University of California at Davis.

In a book called Wines: Their Sensory Evaluation, Amerine listed a few of the common fallacies inherent in blind evaluation of wine. That chapter in the book also hinted at how to avoid some of the problems. In normal circumstances, it’s difficult to avoid some complications, and doing so is often time-consuming.

One element that Amerine and co-author Edward Roessler failed to touch upon (partially because it had little to do with what the book was about) was the vital reason that wine should be tasted blind with at least a small bit of additional information given to the evaluators. I suggest that knowing the region from which a wine comes is vitally helpful in determining the level of quality a wine displays, especially if the goal of the tasting is to determine how regional character plays a role in quality. To try to do so after the fact seems a bit like inverting the process. Indeed, if the purpose of an evaluation is simply to see which wine has the most oomph, regional character is a non-issue.

A famous wine critic, one famed for not tasting wine blind, once stated as a fact that saying a wine displayed terroir was nothing more than an excuse for making bad wine. This naïve statement ignores one of the elements that has made show judgings in Australia as interesting as they are. And it’s one of the reasons that some European competitions fail the ultimate test: relevance. It’s also why people spend what they do on Bordeaux – why, for instance, St. Julien is considered to be a “better” region than is Blaye.


Too Much Information
Those who evaluate wine with sight of the label have data on which to base a score that has nothing to do with the liquid. Knowledge of the price and other factors about a wine (including the cult-like following some iconic wines enjoy) makes it far simpler to justify a score.

Amerine, who disliked evaluations done with sight of the label, was far more blunt in person than he was in print. I met the gentleman at a number of events in the mid-1980s and the tight-lipped, no-nonsense and quite professorial academician had little good to say about such open evaluations when they resulted in a hard, fixed number.

Rating wine is a noble calling that requires a solid background in the craft, which I count more in decades than in years. It is why so many U.S. wine judges, at both competitions and in print, are woefully unskilled. Well-intentioned though they may be, many are ill prepared. It also is why Australian wine show judgings are so good, relatively speaking. That is because there is a mandate that to become a senior judge, a person must have spent at least a decade as an associate judge. And associates in Australia are usually persons who are involved in the wine trade in some fashion. There are very few wine collectors.

You might assume that the sensory evaluation textbook by Amerine and Roessler (a mathematician) ought to be the primary reference for this sort of an article. It cannot be, because it was aimed largely at an academic audience, and much of the wine evaluation done at UC Davis, which used the methodologies outlined in the book, was of student-made wines. Thus the scores achieved in that setting never aimed at rating commercial wine. One of the key goals of evaluation with the 20-point scale developed at UC Davis was to determine the commercial soundness of wines made by students and then submitted for professorial evaluation. That was the reason for the establishment of the UC Davis 20-point rating scale.

Curiously, the evaluations that were staged for student-made wines rarely used the 20-point scale. One former UC Davis student, now a Napa Valley wine maker, told me that most such wines were made simply to test whether this addition or that addition made a significant difference in a wine. Students would make a wine under prescribed conditions; there were stringent rules, and often the aim was to test a scientific variable, such as filtration or crop level. “Most of the time we didn’t even use the 20-point scale,” he said. “The goal was just to see if there was a difference, so we used the duo-trio test or a triangular test.” Moreover, there was a time-sensitivity to these evaluations: the date on which the test was done was at least as important as any result of the test. And when the 20-point scoring system was used for a wine, the score it achieved might not reflect the real quality of the wine. One professor told me years ago that a wine scoring 16 points could easily be preferred as a drink to one that (academically) scored higher.

The wine maker to whom I spoke said, “Oh, sure, there were all kinds of goofy wines you could make that could score high on the [UC Davis] 20-point [scale] that you’d never want to drink!”

Years later, the 20-point scale was adapted to rank wines that were commercial – which was not its intended purpose. Moreover, in his sensory evaluation book, Amerine defined fine wines as those that had either varietal or regional character, and which he said were less than 14 percent alcohol. The vast majority of today’s wines are over 14 percent and varietal character is on the run. Amerine would be aghast.

And after the 20-point scale was adopted for commercial wines, and later when the 100-point scoring “system” evolved, it became clear that it was a lot simpler to rate a wine as 98 on the day of release. But unlike the time-sensitivity of the testing done with student-made wines, the commercial wine, once judged, seems to keep its number forever. Thus the 98-point wine is forever being compared with other 98s (as well as 97s and 99s) – wines that may have been tasted at different times, under different conditions, and possibly with bottle variation playing a role or not. What got a 98 a decade ago is not the same wine now, but people with good memories for numbers love to remind all who will listen that they are drinking a 98-point wine.

The mathematical appearance of the use of numbers implies that there is science behind a wine’s score. It appears to be a result of precise, logical thinking, reasoning, and careful analysis. But the vast majority of scores are nothing more than a judge’s instantaneous gut reaction to a whiff and a sip. A score is not a fact, it is an opinion. Moreover, one fallacy that Amerine never mentioned is that the more wines one evaluates, the less precise the results will be.

The UC Davis scale was not designed, I argue, for evaluating mass numbers of wines in a day, such as the 252 Chardonnays I was asked to evaluate on the first day of a wine competition two decades ago. If it were used that way, imagine the time it would take to write down 2 points for clarity, 4 points for aroma, 3 points for general quality, and so on, and then add the components up for a total out of 20. Even if asked to use a score chart, tasters will taste a wine and then give it an instant dart-board-throw final score.

Now let’s look at the fatigue factor. One reviewer once said he judged 20,000 wines a year. This means that he tasted 55 wines a day every day of the year. And what if he took weekends off? That figure rises to 77 wines a day. The human nose and palate, like all other aspects of the anatomy, can take only so much repetition before the inevitable result is a serious deterioration in accurate judgments – regardless of how a score is arrived at.
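The column’s back-of-envelope figures check out, as a quick sketch shows (the 261-day figure for “weekends off” is my assumption: 365 days minus 104 weekend days):

```python
# Quick check of the reviewer's claimed tasting volume.
wines_per_year = 20_000

every_day = wines_per_year / 365       # tasting all 365 days of the year
weekdays_only = wines_per_year / 261   # 365 minus 104 weekend days

print(round(every_day))      # about 55 wines a day
print(round(weekdays_only))  # about 77 wines a day
```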

Then we get back to Amerine’s personal distaste for open (with sight of label) evaluations. Assume that certain factors that were not mentioned by Amerine come into play (high tannins, very high alcohols, high pH). At some point, not only is the sheer number of wines a daunting task to perform with accuracy, but the impact of the wines is greater than it ever was decades ago, when wines were better balanced and lower in alcohol. Most fine wines in the 1970s were in the 12 percent range. Today, alcohol levels have risen to roughly 15 percent. That is a 25 percent rise in alcohol levels and a consequently massive assault on the innards of our mouths, meaning we can taste many fewer wines with “morning fresh” accuracy. So with the mouth getting more numb the more we evaluate, open evaluations become that much more “reliable.”
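The 25 percent claim is simple relative-change arithmetic on the column’s own round numbers (12 and 15 percent alcohol):

```python
# Relative rise in alcohol level from the 1970s (~12%) to today (~15%).
old_abv = 12.0
new_abv = 15.0

relative_rise = (new_abv - old_abv) / old_abv * 100
print(relative_rise)  # 25.0 percent
```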


The Wine Rating Insurance Policy
Ah, but knowing in advance that you’re tasting, say, Shafer Hillside Select Cabernet Sauvignon is an insurance policy against the low score the wine might receive if tasted blind. Meanwhile, the more massive a wine is, the more likely the assaulted mouth will be able to register its flavor. Subtle characteristics, by contrast, are seen as “wimpy,” a subject we have previously addressed.

At this point in open evaluation, it is not only tempting to factor the brand’s image into the final score -- it’s inevitable. It’s not possible to avoid the baggage we all bring to such endeavors. Under such circumstances, can anyone scoring on the 100-point scale with sight of the label be totally objective? Yet the scoring goes on. And it’s practical. I do it all the time. Setting up scrupulously blind tastings is not easy; it’s time-consuming and can be confounding. Still, much of the open evaluation is done by people with an insufficient education to do it with any degree of relevance.

Thus do the biggest, flashiest wines get the greatest plaudits. And even when people judge wines at a blind tasting, such as a county fair judging, how many ask themselves questions about the wines before putting down a recommendation? Almost none of the American judges I know, in my experience. It’s easy to make wines that are atypical. Have you ever smelled a Pinot Gris that was barrel fermented and had a lovely “Chardonnay” aroma? Is this a great Pinot Gris, with its oak splaying out all over the place? Or is it merely an oaky wine made from a grape that would rather not be put through the indignity of barrel fermentation? What a nightmare! And what about those Pinot Gris that were fermented with a terpene-sensitive yeast strain and smell more like Muscat? They are not typical, and it may be possible that they do not have much regional character either.

Knowing the region is, for me, a more valid way to determine if terroir character plays a role in a particular region. In fact, it has shown itself clearly in Australia, where Coonawarra wineries successfully opposed expansion of their appellation into an area that now is called by another name (Wrattonbully). It has subsequently become quite clear that the two regions produce different styles of Cabernet. And a key point here is that, although the wines are different, both are appreciated for their distinctions, and great wines have been identified from both. I have done strictly blind tastings where the Coonawarra and Wrattonbully wines were clearly identifiable and distinct from one another.

I appreciate what Appellation America is doing with its Best-of-Appellation Evaluation Program in expanding the base of great wines by rigorously identifying and profiling regional distinctiveness, which should be central to defining quality, not an excuse for bad wine making.

It may be simplistic to say, but the top-scoring wines in the major glossies all seem to have similar attributes. Or put more succinctly: narrow wines for narrow minds.


From Appellation America

© 2008 cnwinenews.com Inc. All Rights Reserved.
