The History of Barcode Verification: From Pogo Stick to Three-Legged Stool


For most of its history, barcode quality was thought of as nothing more than print quality, roughly equivalent to legibility: can the scanner read the barcode? Barcode verification meant evaluating the print quality of the barcode, and the earliest form of verification was based on measuring bar and space widths. This proved to be an unreliable way of determining whether a barcode could be successfully scanned, because scanners do not work by measuring bar and space widths, so verification evolved into evaluating and grading the reflective differences between bars and spaces. To this day, reflectivity is the basis for barcode verification as defined in the ISO standards for printed barcodes.

Starting back in the late 1970s, the few verifiers that were available scanned and evaluated only the print quality of a barcode. It was assumed that the user would know how the barcode should be structured: for example, that an encoded number 3 on the left side of a UPC symbol is a different configuration of bars and spaces than a number 3 on the right side. It was also assumed that the user would know that the check digit is not just the last of a user-assigned string of numbers, but a value calculated from the other digits.
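To make those two structural points concrete, here is a small Python sketch (not from the original article; the function names are our own). In UPC-A, each digit's right-hand pattern is the bitwise complement of its left-hand pattern, which is how a scanner knows which direction it is reading, and the twelfth digit is a mod-10 check digit computed from the first eleven:

```python
# Left-hand (odd-parity) and right-hand patterns for the digit 3 in UPC-A.
# The right-hand pattern is the bitwise complement of the left-hand one.
LEFT_3 = "0111101"
RIGHT_3 = "1000010"

def complement(bits: str) -> str:
    """Flip every module: '0' becomes '1' and vice versa."""
    return "".join("1" if b == "0" else "0" for b in bits)

def upc_check_digit(first_eleven: str) -> int:
    """Standard UPC-A mod-10 check digit for the first 11 digits."""
    digits = [int(c) for c in first_eleven]
    # Digits in odd positions (1st, 3rd, ...) are weighted 3;
    # digits in even positions are weighted 1.
    total = sum(d * 3 for d in digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

# complement(LEFT_3) reproduces RIGHT_3, and for the well-known
# UPC number 036000291452 the computed check digit is 2.
```

A verifier that only graded print quality would never notice if the check digit were simply typed in wrong; validating the calculation is a separate, structural check.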

Barcode Quality Evolved Into More Than Print Quality

These assumptions grew to become a barcode quality issue in their own right as barcoding was adopted into more and more industries, each of which needed to make sure its barcodes would not be mistaken for, or accidentally infiltrated by, barcodes from some unrelated industry. Barcoded data began to carry prefixes to identify where specific packets of information began: an auto industry barcode would begin with a different character set than an airline industry barcode, and a date of manufacture would start with a different character set than a sell-by or expiration date. Barcode quality went from being a one-trick pogo stick to a balancing act of very different attributes, any of which could render the barcode useless.
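The prefix idea survives today in the GS1 Application Identifier system, where a two-or-more-digit prefix tells the reader what the following field means and how long it is. As a simplified illustration (handling only a few fixed-length AIs; real GS1 data also uses FNC1 separators for variable-length fields, and this parser is our own sketch, not a verifier's actual code):

```python
# A few fixed-length GS1 Application Identifiers; the real AI table is
# much larger. The value is the length of the data field after the AI.
FIXED_AIS = {
    "01": 14,  # GTIN (item number)
    "11": 6,   # production date, YYMMDD
    "15": 6,   # best-before date, YYMMDD
    "17": 6,   # expiration date, YYMMDD
}

def parse_gs1(data: str) -> dict:
    """Split a string of concatenated fixed-length AI fields."""
    fields, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        if ai not in FIXED_AIS:
            raise ValueError(f"unknown or variable-length AI at {i}: {ai!r}")
        length = FIXED_AIS[ai]
        fields[ai] = data[i + 2:i + 2 + length]
        i += 2 + length
    return fields

# parse_gs1("010003600029145217251231") splits into a GTIN field ("01")
# and an expiration-date field ("17").
```

A data-structure check in a verifier amounts to exactly this kind of parse: if the prefixes, field lengths, or character sets do not line up, the barcode is rejected even when its print quality is perfect.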

Here’s Why Someone Might Hope a Barcode Wouldn’t Scan

Implicit but unspoken throughout this evolution is the importance of making sure the barcode data matches the product it marks. From a retail perspective, the whole point of the barcode is to identify a specific item: monitor its arrival, presence, and eventual departure in order to track sales performance and drive inventory replenishment. In non-retail environments, the barcode may fulfill other uses, but it is still important that the barcode on the sub-assembly, item, carton, or pallet encode the correct numbers. If you have ever wondered whether there could be a circumstance in which someone would wish a barcode would fail to scan, this would be it. Unfortunately, we have never run across such a circumstance: incorrectly encoded barcodes seem to scan perfectly and wreak all kinds of havoc in supply chains and inventory systems. Some, but not all, barcode verifiers can also confirm the identity of the item associated with the barcode, in addition to grading print quality and confirming data structure. This third leg is called the product lookup function.

There is actually a fourth attribute that the most sophisticated barcode verifiers can check: a data match between the human-readable interpretation and the encoded information. This involves OCR recognition (sorry about the redundancy) of the readable characters below a linear barcode and confirming that they match the characters represented in scanner-readable form in the bars and spaces. While this may seem an excessive precaution, a mismatch is not impossible; we have samples in our lab files that demonstrate it.

If you would like more information or have a barcode quality question or comment, please click on the Contact Us tab at the top right of your display—and thank you.

