Wine tasting 'junk science'

Off-topic discussions, non-beer related posts, etc.

Moderator: rsc3da

Wine tasting 'junk science'

Postby jeffjm » Tue Jun 25, 2013 8:37 am

The Guardian apparently has enough time to do something other than tick off the NSA. They recently published an article about research showing there's a lot of inconsistency in wine judging, not just between different judges but also within the same judge presented with the same wine:

http://www.guardian.co.uk/lifeandstyle/2013/jun/23/wine-tasting-junk-science-analysis

From the article:

Each panel of four judges would be presented with their usual "flight" of samples to sniff, sip and slurp. But some wines would be presented to the panel three times, poured from the same bottle each time. The results would be compiled and analysed to see whether wine testing really is scientific.

The first experiment took place in 2005. The last was in Sacramento earlier this month. Hodgson's findings have stunned the wine industry. Over the years he has shown again and again that even trained, professional palates are terrible at judging wine.

"The results are disturbing," says Hodgson from the Fieldbrook Winery in Humboldt County, described by its owner as a rural paradise. "Only about 10% of judges are consistent, and those judges who were consistent one year were ordinary the next year."


Does anyone know if something similar has been done with beer, maybe at GABF, or in a homebrew context? Does wine have a structured set of guidelines for judging, something like what the BJCP puts together? How do we know we aren't guilty of the same thing whenever we judge at a competition?
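For anyone curious what Hodgson's repeated-pour analysis boils down to, here's a minimal Python sketch. All judge names, scores, and the 4-point "consistency" threshold below are invented for illustration; Hodgson's actual methodology and criteria may differ.

```python
# Hypothetical repeat-pour consistency check, in the spirit of the
# wine-judging experiment described in the article. All data is invented.
from statistics import pstdev

# Three blind scores each judge gave the SAME beer (50-point scale).
repeat_scores = {
    "judge_a": [38, 37, 39],   # tight spread: consistent
    "judge_b": [42, 31, 26],   # wide spread: inconsistent
    "judge_c": [30, 34, 29],
}

# Toy criterion: call a judge "consistent" if their repeat scores stay
# within a max-min range of 4 points. (Threshold chosen arbitrarily.)
for judge, scores in repeat_scores.items():
    spread = max(scores) - min(scores)
    print(f"{judge}: range={spread}, stdev={pstdev(scores):.1f}, "
          f"consistent={spread <= 4}")
```

With enough real triplicate pours, the fraction of judges passing a test like this is exactly the kind of number Hodgson reported.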
I set out running but I take my time.
jeffjm
 
Posts: 383
Joined: Sun Feb 26, 2012 5:16 pm
Location: Crestwood

Re: Wine tasting 'junk science'

Postby Kerth » Tue Jun 25, 2013 4:48 pm

We are. Same beer, two competitions, same judge: 42 at one of them, 26 at the other. That is why I take comps with a grain of salt. It is what it is.
Kerth
 
Posts: 302
Joined: Mon Nov 09, 2009 7:12 pm
Location: Imperial

Re: Wine tasting 'junk science'

Postby jeffjm » Tue Jun 25, 2013 7:22 pm

Sure, I've had 15-point swings more than once myself. One set of judges is in a better mood than the other, or clearly doesn't understand the style guidelines, or their sample was way too cold or too warm, or they're blind to diacetyl. Personally, I think the judges who score high are just more intelligent, educated, and perceptive :wink:

What I'm getting at is whether a beer judge, given identical samples some amount of time apart, will give consistent scores. According to the article, 90% of wine judges won't.
I set out running but I take my time.
jeffjm
 
Posts: 383
Joined: Sun Feb 26, 2012 5:16 pm
Location: Crestwood

Re: Wine tasting 'junk science'

Postby Kerth » Tue Jun 25, 2013 9:05 pm

Same beer. Same judge.
Kerth
 
Posts: 302
Joined: Mon Nov 09, 2009 7:12 pm
Location: Imperial

Re: Wine tasting 'junk science'

Postby jackb » Wed Jun 26, 2013 10:20 am

Kerth wrote:We are. Same beer, two competitions, same judge: 42 at one of them, 26 at the other. That is why I take comps with a grain of salt. It is what it is.


I've heard this type of remark many times. The fact is, the beer wasn't the same. Two bottles from the same batch were judged. They may or may not have had similar qualities. I have first-hand experience with this, having tasted two beers from the same batch, the first of which reeked of diacetyl and the second of which had none.

Did the scores fairly reflect the comments? This, for me, is the key question. If the judge noted minor flaws the first time and bigger flaws the next, then I would think the beer was fairly evaluated according to that judge's skills. On the other hand, if the comments were similar between the two competitions and the scores were different, the judge didn't do a very good job.

You should have received at least two scoresheets per entry. Were the scores and comments of the judges comparable within each competition?

There are other variables besides the beer itself that affect the score, but they shouldn't produce differences as large as your 42 and 26. These include the beer's position in the flight, the influence of the other judge(s) in the round, changes in the judges themselves, and the effect of the calibration beer (if your score falls outside the bulk of the others, you will tend to adjust toward the average; that's the reason we have a calibration beer).

Finally, what score do you think the beer deserved?
Jack Baty
jackb
 
Posts: 126
Joined: Tue Dec 19, 2006 1:15 pm
Location: Saint Louis City, a block or so west of the Botanical Garden

Re: Wine tasting 'junk science'

Postby siwelwerd » Wed Jun 26, 2013 11:37 am

jackb wrote:The fact is, the beer wasn't the same. Two bottles from the same batch were judged. They may or may not have had similar qualities.


+1

Regarding the original question of "consistency", it depends on how you define it. I am highly confident that most of our judges are consistently distinguishing between 26 point beers and 42 point beers. There is less consistency at the extremes--many judges are hesitant to score much over 40, despite making no mention of anything about the beer that can be improved; others (and I find myself in this category) try to be overly generous with seriously bad beer. At the risk of making a "No true Scotsman" argument, I think good judges can and will consistently hit a 5 point range.

That said, there are very good reasons we go to all the trouble of filling out an entire scoresheet rather than just sending back a number. I've always been of the opinion that one should never get too hung up on the number.

I agree it would be interesting to run some analysis. But part of the problem would be experimental setup: it's not as easy as you would think to present the exact same beer. Shipping, storage, and handling can each affect a beer's score by several points.
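As a sketch of what such an analysis might look like: given scoresheets for the same batch at two competitions, one could compare the between-competition gap to the within-competition spread among judges. All scores below are invented for illustration (loosely echoing the 42/26 example upthread), and the interpretation rule is a toy heuristic, not an established statistical test.

```python
# Toy comparison of between-competition vs within-competition score
# variation for the same batch. All numbers are invented.
comp1 = [42, 39]   # two judges' scores at competition 1
comp2 = [26, 30]   # two judges' scores at competition 2

within_1 = max(comp1) - min(comp1)
within_2 = max(comp2) - min(comp2)
between = abs(sum(comp1) / len(comp1) - sum(comp2) / len(comp2))

print(f"within-comp spreads: {within_1}, {within_2}")
print(f"between-comp gap: {between:.1f}")
# If the between-competition gap dwarfs the within-competition spreads,
# something beyond normal judge disagreement (bottle variation, handling,
# palate fatigue) is likely at work.
```

With real competition data, you'd want many entries and a proper variance decomposition, but even this crude check separates "judges disagree" from "something changed between competitions."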
siwelwerd
 
Posts: 936
Joined: Tue Jun 10, 2008 6:38 pm
Location: Tuscaloosa, AL

Re: Wine tasting 'junk science'

Postby Bob Brews » Wed Jun 26, 2013 4:52 pm

Let me start by saying I am hesitant to reply to this thread for fear the judge with the wide swing was yours truly.
As always, I have to agree with Jack. I have judged a beer, given it a very good score, and then tasted it again at the best-of-show table. It is amazing that the beer I gave a solid score to hours earlier was now one I had a hard time drinking. It is like the girl at closing time who looks nothing like the girl the next morning.
Anyone who has judged knows about problems like palate fatigue on very hoppy or very flavorful beers. The beer that was a hop bomb 5 samples ago is now barely a pale ale, and the beer you drink after that jalapeño peach bourbon Berliner weisse just doesn't taste right. As judges, we try to set up the flight of beers in a way that minimizes ruining our taste buds, but one of you wicked brewers always sneaks something in there.
Beer judging is subjective and not an exact science.
Now let's talk about that score range. I say don't worry about the score. As a judge, I give scores to beers as a way to place them on the page according to how well they meet the guidelines I am using that day. One day a 32 could win a category, and the next time it may take a 42 to win, and they both could be the same beer. But if I were judging identical beers, I think the order would be almost the same. The good beers rise to the top of the flight. Putting a number on that top spot is where the challenges arise. Sometimes there are negotiations with another judge who loves or hates a certain beer, sometimes there is a prior beer that sets the standard you are judging against, and sometimes yours is the first beer (which unfortunately can be a kiss of death or the road to glory). Like golf, sometimes 4 under par wins and sometimes 2 over par wins, but on that day, it is the best golfer picking up the trophy.
So, young squire, do not live and die by the number attached to your beer, but rejoice when it is ranked well against others. Or not.
Bob Brews
 
Posts: 419
Joined: Tue Dec 19, 2006 2:19 pm

Re: Wine tasting 'junk science'

Postby jeffjm » Thu Jun 27, 2013 2:19 pm

Bob Brews wrote:Now let's talk about that score range. I say don't worry about the score. As a judge, I give scores to beers as a way to place them on the page according to how well they meet the guidelines I am using that day. One day a 32 could win a category, and the next time it may take a 42 to win, and they both could be the same beer. But if I were judging identical beers, I think the order would be almost the same. The good beers rise to the top of the flight. Putting a number on that top spot is where the challenges arise. Sometimes there are negotiations with another judge who loves or hates a certain beer, sometimes there is a prior beer that sets the standard you are judging against, and sometimes yours is the first beer (which unfortunately can be a kiss of death or the road to glory). Like golf, sometimes 4 under par wins and sometimes 2 over par wins, but on that day, it is the best golfer picking up the trophy.


"Don't worry about the score" is a good suggestion when it comes to placing. However, I worry about my score when I start thinking about changes to my process or recipe. Was Kerth's beer really in the mid-range of 'good' or was it actually towards the top end of 'excellent'? Obviously one is going to require much more adjustment than the other.

As Jack pointed out, the comments should help, but entrants are going to look at the number as an indicator of the severity of any flaws the judges perceive.
I set out running but I take my time.
jeffjm
 
Posts: 383
Joined: Sun Feb 26, 2012 5:16 pm
Location: Crestwood

