Accuracy, resolution, and precision are easily confused parameters used to describe a system's performance. While working on a recent temperature-monitoring network built around digital temperature sensors from Texas Instruments, I had several choices for how accurate the system would be. Upon further reading, I realized I could control the system's accuracy to some extent; it was resolution that was constrained by the hardware itself. Resolution and accuracy seemed synonymous at first glance, but they are actually very different things.
Resolution is simply the smallest change that can be measured. In the case of my temperature sensors, resolution is the smallest increment of temperature change the sensor can detect. When choosing a temperature sensor for an application, the required resolution should be considered when selecting the component. For simple ambient air temperature monitoring, ±0.5°C would be more than adequate, and most modern sensors far exceed this specification.
Looking at the TMP100 temperature sensor's datasheet, accuracy and resolution are specified as:
±2.0°C from -25°C to +85°C
±3.0°C from -55°C to +125°C
9 to 12-Bits, User-Selectable
Accuracy is specified over two temperature ranges for this sensor: across the narrower -25°C to +85°C range it is accurate to ±2.0°C, while over the wider -55°C to +125°C range accuracy degrades to ±3.0°C. Note that these are maximum rated values, or absolute worst cases. The datasheet also states typical values of ±0.5°C and ±1.0°C respectively (I'll go into more about accuracy shortly).
Resolution is also a selectable parameter on this sensor, and the right choice depends on the application the temperature monitor is intended for. Choosing the minimum 9-bit resolution gives you 512 sample points across the device's usable temperature range; selecting the 12-bit resolution increases that to 4,096 sample points. To translate these bit depths into real-world temperatures, divide the full temperature range by the number of possible samples over that range. Looking at the wider -55°C to +125°C range, 9 bits gives a resolution of about 0.35°C, and 12 bits gives about 0.04°C.
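That arithmetic can be sketched in a few lines of Python (a quick illustration of the range-divided-by-samples method described above, not TI code):

```python
# Translate bit depth into temperature resolution over the TMP100's
# wider operating range, using full range / number of sample points.
T_MIN, T_MAX = -55.0, 125.0           # wider range from the datasheet
full_range = T_MAX - T_MIN            # 180°C

resolution = {bits: full_range / 2**bits for bits in (9, 10, 11, 12)}

for bits, step in resolution.items():
    print(f"{bits}-bit: {2**bits} sample points, {step:.4f}°C per step")
```

Running this shows roughly 0.35°C per step at 9 bits and 0.04°C per step at 12 bits, matching the figures above.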
While it may seem sensible to just use the higher 12-bit resolution regardless of application, other factors such as conversion time and the available memory in your device can make a lower 9-bit resolution the more practical choice. For my temperature monitor, I compromised on 10-bit resolution, giving a fast conversion time and a resolution of about 0.17°C.
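The conversion-time side of that trade-off can be sketched as follows. Note the base time here is an assumed placeholder, not a datasheet figure, and the doubling-per-bit pattern is an assumption typical of this style of converter:

```python
# Rough sketch of the resolution vs. conversion-time trade-off.
# BASE_MS is a hypothetical 9-bit conversion time, and the
# doubling-per-added-bit pattern is an assumption, not a spec value.
BASE_MS = 40  # assumed milliseconds for a 9-bit conversion

conversion_ms = {bits: BASE_MS * 2 ** (bits - 9) for bits in (9, 10, 11, 12)}

for bits, ms in conversion_ms.items():
    print(f"{bits}-bit: ~{ms} ms per conversion")
```

Under these assumptions, each added bit of resolution doubles how long you wait for a reading, which is why a mid-range setting can be the practical compromise.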
Precision is the repeatability of a measurement: the ability of a process to return the same value reliably when sampled many consecutive times. A weight scale with high resolution demonstrates this well. If an object of fixed weight were placed on it, and external environmental factors such as wind and vibration were isolated from the scale, multiple samples of the item's weight should be identical in value. Every sampled value being the same (or very close to it) shows the scale has high precision. The same goes for the temperature sensor: if an object were kept at a constant temperature, and the sensor were placed on it with other environmental factors (such as air temperature) also held constant, multiple readings would match, again indicating high precision. Note that when judging a system's precision, its resolution must still be kept in mind. If the scale's resolution were 100 ounces, any object weighing between 50 and 150 ounces would reliably register as 100 ounces. So while the scale may seem precise because it repeatably and reliably shows the same result of 100 ounces, in actuality it is precise only to its stated resolution.
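The coarse-scale example can be simulated to show how quantization hides small variations. This is a hypothetical illustration (the weight, noise level, and resolution are made-up values), not a model of any real scale:

```python
# Simulate repeated readings of a fixed 100-ounce object on a scale
# whose resolution is a coarse 100 ounces. Small noise on the raw
# measurement disappears once the reading is quantized.
import random
import statistics

random.seed(0)

TRUE_WEIGHT = 100.0   # ounces; assumed fixed object
RESOLUTION = 100.0    # ounces per step; deliberately coarse

def read_scale(noise=0.5):
    raw = TRUE_WEIGHT + random.gauss(0, noise)   # noisy raw measurement
    return round(raw / RESOLUTION) * RESOLUTION  # quantize to resolution

readings = [read_scale() for _ in range(10)]
print(readings)                     # every reading lands on 100.0
print(statistics.pstdev(readings))  # spread of 0.0 looks "perfectly precise"
```

Every reading comes back as exactly 100.0 ounces, so the scale appears perfectly repeatable, yet it tells you nothing finer than its 100-ounce step, which is the point made above.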
Regarding the accuracy of the scale mentioned previously: just because it has high resolution and demonstrates precision through reliably repeatable samples, there is no guarantee that it is accurate. This is where calibration comes into the scenario. Since accuracy is the closeness of a sampled value to its actual value, a system's accuracy can be poor even while its resolution and precision are high.
Poor accuracy in a system can be corrected by calibration. In the case of the high-resolution scale above, measuring an object of a known standard fixed weight and correcting for the offset between the actual and measured values will increase the system's accuracy. Accuracy can also vary within its environment. Looking back at the temperature sensor, the datasheet defines an accuracy-versus-temperature graph showing that, over its usable range, accuracy decreases as temperature increases. This too can be compensated for via calibration. The same holds true for the scale: several calibrations with different known weights throughout the scale's usable range would let you correct a non-linear sensor response, keeping the calibration accurate across the entire range and avoiding poor accuracy at the sensor's extremes.
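A simple version of that calibration idea is a two-point correction for offset and gain. The reference weights and raw readings below are invented for illustration; correcting a genuinely non-linear response, as described above, would extend this to more reference points with piecewise interpolation:

```python
# Hypothetical two-point calibration: map raw scale readings onto
# true values using two known reference weights.
def make_calibration(ref_low, meas_low, ref_high, meas_high):
    """Return a function that corrects raw readings via a linear
    fit through two (measured, reference) calibration points."""
    gain = (ref_high - ref_low) / (meas_high - meas_low)
    offset = ref_low - gain * meas_low
    return lambda raw: gain * raw + offset

# Suppose the scale reads 10.4 for a 10.0-oz standard and 49.2 for 50.0 oz.
calibrate = make_calibration(10.0, 10.4, 50.0, 49.2)

print(calibrate(10.4))  # ~10.0, matches the low reference
print(calibrate(49.2))  # ~50.0, matches the high reference
print(calibrate(30.0))  # a corrected mid-range reading
```

After calibration, both reference weights read true, and readings in between are corrected by the same fitted line; the scale's precision and resolution are unchanged, but its accuracy improves.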