If people dislike CAMRA’s Champion Beer of Britain model, what other approaches might there be to judging the best beers?
Off the back of our post earlier this week, various people elaborated on their objections to the CAMRA approach:
“…from a consumer perspective, which is where CAMRA should be coming from, it’s misleading for the brewery to market a beer based on winning CBOB if that’s not what you get in your glass.” (James)
“Festival tasting isn’t the same as in a pub. A few mouthfuls of a beer aren’t enough, especially if you’ve drunk something completely different previously.” (Steve)
“As a GBBF staffer I was able to get myself a taster of Abbot from the actual cask that was judged… the idea that it’s the second best beer in the land is just ludicrous.” (Ben)
“…it’s unsustainable to get volunteers to judge these awards throughout the industry.” (Anonymous beer writer)
To summarise these objections, and others:
- the beer at judging isn’t the same beer consumers encounter
- the process is opaque, leaving space for suspicion
- the results are ‘wrong’, proving that the process must be, too
- the judges aren’t qualified
What other models are there?
When we asked about this via Patreon, John ‘The Beer Nut’ Duffy said:
“I’ve been running one for several years on a drinkers’ choice basis: get everyone who’s interested to name their top three beers of the last year and award points accordingly.”
We can see pros and cons here. On the upside, you’re likely to get a more interesting list of candidates, based on people’s actual experiences in the real world, throughout the year.
One downside might be, again, a tendency to reward more mainstream, readily available beers from breweries with better distribution.
Also, “everyone who’s interested” rings alarm bells for us. Whose opinions do you miss? And to whose views do you end up giving undue weight?
As it happens, this is more or less how the first round of voting in CBOB works: branches send out emails inviting people to pick their favourite beers from a big old list of local candidates.
You might also take RateBeer or Untappd awards or lists as an extreme example of “everyone who’s interested” and, as we know, those are not uncontroversial either. You might summarise the industry response to Untappd as “Get back in your box, plebs!”
Critics’ choice
Another related approach might be the Sight & Sound model.
For the most recent round of its best-films-of-all-time poll, the BFI’s film magazine asked around 1,600 professional film critics, filmmakers and other industry types to nominate 10 films each.
The results were hugely controversial.
Some felt too many ‘obscure’ films made the top 100, or that the choices were too political. Others complained about the poll’s continued bias towards films by white men from Britain and America.
And people also asked: “Who chooses the choosers?”
A poll like this feels somewhat objective but, at some point, someone has to pick the people who do the picking. Is this where bias creeps in?
This is more or less how most trad beer awards work, and the same criticism applies.
The difference is that, because most beer judging requires you to be on site rather than just sending a quick email, the pool of critics is further reduced.
Many discerning palates are cut out of the process because they can’t afford to cover travel costs or to work for free. So the choosers are chosen by… chance? Personal circumstances? Less than ideal.
The Eurovision model
What if we combine a popular vote with the views of expert judges? That’s how it works at the Eurovision Song Contest these days.
A jury in each country, made up of singers, musicians and songwriters, awards points to each song, based on a dress rehearsal staged the night before the final.
That is then combined with a public vote on the big night.
In theory, this smooths out both the elitist tendencies of the jury (you can’t trust these so-called ‘experts’!) and the mischievous tendencies of the public, who often vote based on national allegiances and/or novelty value.
But guess what? People still get angry about the results of Eurovision. They still feel that the wrong country won and that their own was robbed.
If anything, this system is the worst of both worlds: “We had the right result until they added the numbers from those amateurs/snobs, at which point it veered off course!”
You can’t please everyone
We cannot imagine a system for judging the best beer that won’t cause controversy.
That is half the point of awards, though – to generate conversation and make people think about beer.
Who is seriously using these announcements to decide which beer to drink, or which to avoid?
It’s possible, we suppose, that given a choice between two similar beers, you might pick the one with a little CBOB medal on its pumpclip.
Or that the first time you see Abbot in a pub after it’s won an award you’re tempted to give it a try.
But, really, they’re just a bit of fun.
Transparency helps
Having said all of that, being really clear about the process is one way to earn people’s trust.
Pete Brown addressed this in response to criticism of the British Guild of Beer Writers’ Awards last year, a move we thought was smart.
Exposing your process also allows people to highlight areas for improvement, if you really want to hear those suggestions.
The best you can hope for is that people say, “I don’t like the result, but I don’t doubt it was fair.”
CAMRA’s process is not secret, even if it feels a bit obscure. It’s outlined here, in a PDF, described as an internal memo.
How would you improve that process, in practice? Where are its points of weakness?