What would a perfect beer awards process look like?

If people dislike CAMRA’s Champion Beer of Britain model, what other approaches might there be to judging the best beers?

Off the back of our post earlier this week, various people elaborated on their objections to the CAMRA approach:

“…from a consumer perspective, which is where CAMRA should be coming from, it’s misleading for the brewery to market a beer based on winning CBOB if that’s not what you get in your glass.” (James)

“Festival tasting isn’t the same as in a pub. A few mouthfuls of a beer aren’t enough, especially if you’ve drunk something completely different previously.” (Steve)

“As a GBBF staffer I was able to get myself a taster of Abbot from the actual cask that was judged… the idea that it’s the second best beer in the land is just ludicrous.” (Ben)

“…it’s unsustainable to get volunteers to judge these awards throughout the industry.” (Anonymous beer writer)

To summarise these objections, and others:

  • the beer at judging isn’t the same beer consumers encounter
  • the process is opaque, leaving space for suspicion
  • the results are ‘wrong’, proving that the process must be, too
  • the judges aren’t qualified

What other models are there?

When we asked about this via Patreon, John ‘The Beer Nut’ Duffy said:

“I’ve been running one for several years on a drinkers’ choice basis: get everyone who’s interested to name their top three beers of the last year and award points accordingly.”

We can see pros and cons here. On the upside, you’re likely to get a more interesting list of candidates, based on people’s actual experiences in the real world throughout the year.

One downside might be, again, a tendency to reward more mainstream, readily available beers from breweries with better distribution.

Also, “everyone who’s interested” rings alarm bells for us. Whose opinions do you miss? And to whose views do you end up giving undue weight?
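The arithmetic behind a poll like John’s is simple enough to sketch in a few lines of code. Here’s a minimal, hypothetical example in Python, assuming a 3-2-1 weighting for first, second and third choices – John doesn’t say exactly how he awards the points, so treat the weighting and the beer names as placeholders:

```python
from collections import Counter

# Hypothetical 3-2-1 weighting for a "top three beers of the year" poll.
# (The actual points scheme used isn't specified in the post.)
POINTS = [3, 2, 1]

def tally(ballots):
    """Each ballot is an ordered list of up to three beer names."""
    scores = Counter()
    for ballot in ballots:
        for beer, points in zip(ballot, POINTS):
            scores[beer] += points
    # Returns (beer, score) pairs, highest score first.
    return scores.most_common()

# Three imaginary drinkers' ballots.
ballots = [
    ["Beer A", "Beer B", "Beer C"],
    ["Beer B", "Beer A"],
    ["Beer A", "Beer C", "Beer B"],
]

print(tally(ballots))  # [('Beer A', 8), ('Beer B', 6), ('Beer C', 3)]
```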

As it happens, this is more or less how the first round of voting in CBOB works: branches send out emails inviting people to pick their favourite beers from a big old list of local candidates.

You might also take RateBeer or Untappd awards or lists as an extreme example of “everyone who’s interested” and, as we know, those are not uncontroversial either. You might summarise the industry response to Untappd as “Get back in your box, plebs!”

Critics’ choice

Another related approach might be the Sight & Sound model.

For the most recent round of its best-films-of-all-time poll, the BFI’s film magazine asked around 1,600 professional film critics, filmmakers and other industry types to nominate 10 films each.

The results were hugely controversial.

Some felt too many ‘obscure’ films made the top 100, or that too many of the choices were political. Others complained about its continued bias towards films by white men from Britain and America.

And people also asked: “Who chooses the choosers?”

A poll like this feels somewhat objective but, at some point, someone has to pick the people who do the picking. Is this where bias creeps in?

This is more or less how most trad beer awards work, and the same criticism applies.

The difference is that, because most beer judging requires you to be on site rather than just sending a quick email, the pool of critics is further reduced.

Many discerning palates are cut out of the process because they can’t afford to cover travel costs, or work for free. So the choosers are chosen by… chance? Personal circumstances? Less than ideal.

The Eurovision model

What if we combine a popular vote with the view of expert judges? That’s how it works at the Eurovision Song Contest these days.

A jury in each country, made up of singers, musicians and songwriters, awards points to each song based on a dress rehearsal held the night before the final.

That is then combined with a public vote on the big night.

In theory, this smooths out both the elitist tendencies of the jury (you can’t trust these so-called ‘experts’!) and the mischievous tendencies of the public, who often vote based on national allegiances and/or novelty value.

But guess what? People still get angry about the results of Eurovision. They still feel the wrong country won, and that their country was robbed.

If anything, this system is the worst of both worlds: “We had the right result until they added the numbers from those amateurs/snobs, at which point it veered off course!”

You can’t please everyone

We cannot imagine a system for judging the best beer that won’t cause controversy.

That is half the point of awards, though – to generate conversation and make people think about beer.

Who is seriously using these announcements to decide which beers to drink, or which to avoid?

It’s possible, we suppose, that given a choice between two similar beers, you might pick the one with a little CBOB medal on its pumpclip.

Or that, the first time you see Abbot in a pub after it’s won an award, you’re tempted to give it a try.

But, really, they’re just a bit of fun.

Transparency helps

Having said all of that, being really clear about the process is one way to earn people’s trust.

Pete Brown addressed this in response to criticism of the British Guild of Beer Writers’ Awards last year, which we thought was a smart move.

Exposing your process also allows people to highlight areas for improvement, if you really want to hear those suggestions.

The best you can hope for is that people say, “I don’t like the result, but I don’t doubt it was fair.”

CAMRA’s process is not secret, even if it feels a bit obscure. It’s outlined here, in a PDF, described as an internal memo.

How would you improve that process, in practice? Where are its points of weakness?


On Judging

London Amateur Brewers competition.

When the London Amateur Brewers (LAB) asked if we would be interested in joining the judges for their recent regional home brewing competition, we jumped at the chance.

Like most people, we’ve observed the results of judged competitions with bemusement in the past: “How did that win!?” Although we knew it wouldn’t be on the scale of CAMRA’s Champion Beer of Britain, we thought it might give us some insight into how such decisions get made.

When we arrived at Beavertown’s new Tottenham Hale brewery at 8:45 am on a Saturday, the first thing that struck us was how reliant it all is on goodwill: on unpaid volunteers with the necessary skills giving up half their weekend. (Our only qualification for judging beer is enthusiasm, which is why we were each paired with someone more experienced who had achieved some level of BJCP accreditation.)

Ah, yes: the BJCP, the Beer Judge Certification Program. Judging a competition with something like 150 entries without using some kind of rules, however arbitrary, would be impossible. The BJCP system breaks those entries down into categories and provides rigid style definitions: the guidelines say Cream Ale ought to have ‘sparkling clarity’, for example, so even if your attempt tastes incredible, it will lose points if it is hazy and has lumps floating in it.

In fact, some beers which didn’t really taste all that great scored reasonably well because they were clear, correctly conditioned, and formed and retained a decent head. Flavour isn’t everything, at least not in this context.

The emphasis on ‘style’ works reasonably well with fairly common types of beer such as bitter or IPA. The judges had lots of experience drinking commercial examples and knew what to expect. ‘California Common’, on the other hand, presents more of a challenge, as does ‘Festbier’: only a handful of commercial examples of either are ever found on sale in the UK, and usually past their best, so judges were largely reliant on the BJCP’s written descriptions to decide if a beer was ‘correct’.

There was also a sense that some entrants struggled to find a category for their beer and, shoe-horning it into the closest fit, found themselves docked points for not being ‘true to style’. Others seemed to have attempted one style but accidentally brewed another (e.g. it had gone sour, or turned out darker than expected) and were trying it on.

All in all, the BJCP system rewards conformity and ‘cloning’ while punishing creativity.

Now, let’s be clear: the judges try their absolute best, within the rules they’ve signed up to, to make sure great beers get the recognition they deserve. There was lots of agonising and soul-searching, with arbitration from stewards and senior judges, and we heard variations on “I’d drink that all day if I could, but it’s not a [STYLE]” throughout the day. But final decisions in each category were made by re-tasting the top scorers and ranking them without close reference to style guidelines: which one really was best?

The toughest part, arguably, is completing feedback forms for each beer. If the comments aren’t honest, then what’s the point? But if they’re too blunt, then they might be discouraging, and the LAB don’t want to put people off brewing or competing in future. Generally, it was possible to say, even where a beer scored poorly, that it was a problem with ‘trueness to style’ rather than poor brewing technique. There was, however, no nice way to say, “This beer made me retch and forced me to spit it into a bucket.”

Some good beers no doubt got knocked out along the way, just as some good teams have been knocked out of the World Cup, but those that made it to the final judging table for ‘Best of Show’ were undoubtedly deserving. Even here, though, a couple of category winners simply didn’t stand up to scrutiny against the rest: the best saison, for example, was better than the best lager in absolute terms. The overall winner, after much impassioned argument and careful consideration, was a flawless black IPA with commercial potential.

On balance, we might enter a BJCP-rules competition if we thought we’d brewed a particularly good example of a standard style, but not if we’d brewed anything remotely unusual.

And, from now on, we’ll certainly have a little more sympathy with the judges at national and international competitions.