by Paul Keller, CQE, CQA
Maintaining an effective Calibration Control System is a requirement of ISO 9000 and other Quality Standards. ISO 9001 states that measurement equipment "shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability."
The "required measurement capability" can be considered a fit-for-use criterion. The resolution of a gage (i.e., the graduations on its scale) might seem to indicate its capability. For example, a tape measure that indicates length to the nearest 1/32" (.03125") would obviously not be capable of measuring a dimension with a tolerance of ±.001". But what about a gage with .0005" resolution?
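As a rough illustration of the arithmetic, the sketch below compares gage resolution to the total tolerance band for the two cases above. The 10:1 resolution-to-tolerance rule of thumb used here is a common guideline assumed for illustration, and, as the discussion that follows makes clear, resolution alone is not the whole story:

```python
# Sketch: compare gage resolution to the total tolerance band.
# Assumes the common "10:1" rule of thumb (resolution no more than 1/10
# of the tolerance band) purely for illustration -- it is not a substitute
# for a full measurement uncertainty analysis.

def resolution_ratio(resolution, tolerance_band):
    """Return the tolerance band divided by the gage resolution."""
    return tolerance_band / resolution

# Tape measure reading to the nearest 1/32" against a +/-.001" tolerance
# (tolerance band = .002" total):
tape = resolution_ratio(resolution=1 / 32, tolerance_band=0.002)
print(f"tape measure ratio: {tape:.3f}")   # far below 10: clearly not capable

# Gage with .0005" resolution against the same tolerance:
gage = resolution_ratio(resolution=0.0005, tolerance_band=0.002)
print(f".0005 gage ratio: {gage:.1f}")     # 4.0: resolution alone looks marginal
```

Even the second gage falls short of the 10:1 guideline, and the guideline itself says nothing about the other error sources discussed next.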
The answer depends on the measurement uncertainty, so resolution alone is not a sufficient guideline for calibration control systems. Measurement System error (or uncertainty) may arise from numerous sources, including the methodology for sample measurement, the person(s) conducting the measurements, the environment, and the equipment. A properly designed Calibration Control System seeks to quantify (i.e., "uncertainty is known") the errors, and to minimize those that are significant. We usually classify the uncertainty in the following terms: Accuracy, Repeatability, Reproducibility, Linearity, and Stability. This classification makes it easier to determine root causes of the uncertainty, which leads to corrective actions that reduce measurement uncertainty.
Consider the first of these terms: Accuracy. Accuracy, also known as bias, is typically estimated by performing a Calibration Study, in which we measure a standard (for example, a gage block) using the equipment to be calibrated. Accuracy refers to the deviation of the measurement taken by the equipment under study from the known (true) value of the standard. The true value of the standard is obtained from its own calibration. Thus, a Calibration System must use standards that are in turn calibrated to higher standards, which ultimately are calibrated against Primary Standards. The National Institute of Standards and Technology (NIST, formerly the National Bureau of Standards, or NBS) physically maintains these Primary Standards. Secondary Standards, issued by NIST to private companies, are directly traceable to a Primary Standard.
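As a minimal sketch (the certified value and reading below are hypothetical), the bias estimate reduces to a simple difference:

```python
# Minimal sketch of the bias (Accuracy) estimate from a Calibration Study:
# the deviation of the observed reading from the standard's known value.
# The standard's "true" value comes from its own calibration certificate,
# traceable through higher-level standards ultimately to NIST.

reference_value = 1.0000   # hypothetical certified gage block value, inches
reading = 1.0003           # hypothetical value from the equipment under study

bias = reading - reference_value
print(f"estimated bias: {bias:+.4f} in")
```

Note that this single-reading estimate ignores the study's own measurement error, a point taken up below.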
The Calibration Study itself is subject to measurement error. A defined methodology for conducting the study will help reduce variation, as will a controlled environment and trained personnel. Yet variation can remain from these sources, as well as from the inherent inability of the equipment to provide consistent values on a repeat basis. (Here, equipment variation includes not only the gage itself, but also other equipment in the measurement system, such as fixturing.) If we estimate Accuracy from a single measurement, we ignore this error, and if we then adjust the gage to compensate for its apparent drift since the last Calibration, we risk tampering with the measurement system, which could actually degrade its Accuracy. For this reason it is usually best to take repeat readings during the Calibration Study, completely re-initializing the measurement system (including re-setup of the fixturing) between measurements.
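The value of repeat readings can be sketched numerically. The readings below are hypothetical, and the simple "adjust only if the bias clearly exceeds its own estimation uncertainty" decision rule is an illustrative assumption, not a prescribed procedure:

```python
import statistics

# Hypothetical repeat readings of a 1.0000" standard, with the full
# measurement setup (including fixturing) re-initialized between readings.
reference_value = 1.0000
readings = [1.0003, 0.9998, 1.0002, 0.9999, 1.0004, 0.9997]

bias = statistics.mean(readings) - reference_value
repeatability = statistics.stdev(readings)        # sample standard deviation
std_err = repeatability / len(readings) ** 0.5    # uncertainty of the bias estimate

# Adjusting on a single reading cannot distinguish true drift from
# repeatability noise -- that is tampering. A simplified decision rule:
# adjust only if the apparent bias clearly exceeds its own uncertainty.
if abs(bias) > 2 * std_err:
    print(f"apparent drift {bias:+.5f} exceeds noise; adjustment may be warranted")
else:
    print(f"apparent drift {bias:+.5f} within noise; do not adjust (avoid tampering)")
```

With these readings the apparent drift is well within the repeatability noise, so an adjustment would be tampering rather than a correction.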