Uncertainty in Measurement

These terms mean very different things in a chemistry class than they do in casual conversation. In chemistry class, ‘error’ is not the same as ‘mistake,’ and ‘uncertainty’ is not the same as ‘ambiguity.’


Systematic and Random Error

In any measurement, chaos, instrumental limitations, and human failings prevent perfection. This lack of perfection in measurements is called error. Some errors are random errors, meaning that they cause some of the measurements to be a little high and some to be a little low. These are generally due to chaos, and, while they can be minimized, they can never be eliminated. Systematic errors are ones that cause readings to be always high or always low. These are generally due to mistakes, equipment malfunction, and poor laboratory technique. For example, reading the volume of a graduated cylinder from the top of the meniscus rather than the bottom will cause all of the volume measurements to be slightly high.

Students in chemistry lab are warned to use the same balance for all the mass measurements for an experiment. What kind of error might occur if a student uses several different balances during the course of a single experiment?

All balances are sensitive to their environments. Dozens of factors contribute to the accuracy of a balance: minor things like air currents and the slope of the countertop, and major things like spilled chemicals on the pan. Any given balance will read slightly high or slightly low on any given day, but these differences tend to be fairly repeatable. If a student uses the same balance throughout an experiment, the errors tend to "subtract out" when weighing by difference. If the student changes balances, some readings will be high and others low by unpredictable amounts. This will cause a random error.
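The "subtract out" effect can be sketched with some arithmetic. All of the numbers below (beaker and sample masses, the 0.1 g offsets) are hypothetical, chosen only to illustrate the idea:

```python
# A sketch with hypothetical numbers: a balance's systematic offset
# cancels when both readings for a weighing-by-difference come from
# the SAME balance, but not when the balances are mixed.

true_beaker = 50.000   # g, true mass of the empty beaker (assumed)
true_sample = 2.500    # g, true mass of the sample (assumed)

offset_a = +0.100      # balance A reads 0.1 g high
offset_b = -0.100      # balance B reads 0.1 g low

# Same balance (A) for both readings: the offset subtracts out.
same_balance = round(((true_beaker + true_sample) + offset_a)
                     - (true_beaker + offset_a), 3)

# Balance A for the full beaker, balance B for the empty one:
# the offsets add instead of cancelling.
mixed_balance = round(((true_beaker + true_sample) + offset_a)
                      - (true_beaker + offset_b), 3)

print(same_balance)    # 2.5  -> the true sample mass
print(mixed_balance)   # 2.7  -> off by 0.2 g
```

Notice that the size of each balance's error never changed; only the student's habit of mixing balances turned a harmless systematic offset into a 0.2 g discrepancy.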



Accuracy and Precision

Accuracy and precision are two indicators of the validity of measurements. Accuracy is the agreement of a measurement with an actual value (how close it is to what it is supposed to be). This is the best indicator of how ‘good’ a measurement is. Unfortunately, if something is truly worth measuring, we usually don’t know its actual value. Accuracy is important in the chemistry lab during experiments in which the lab instructor knows what the value actually is and the students try to get a result as close as possible to this ‘true’ value. Scientists routinely check their instruments for accuracy by using them to measure a standard for which the values are well known. If the instrument’s readings match the accepted values for the standard, it is probably working correctly.

Precision, on the other hand, is an indication of how close several measurements are to one another. Systematic errors shift every reading in the same direction, so they damage accuracy; random errors scatter the readings, so they damage precision. If a student has poor laboratory technique, his or her measurements will tend to fluctuate more than those of another student with good technique. Laboratory instructors will often have students perform a measurement several times and then calculate an average with a standard deviation. If the instructor knows what the result should be, the student’s average indicates the accuracy and the standard deviation indicates the precision.
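The average-plus-standard-deviation summary described above can be computed directly. The five trial values and the "true" value here are invented for illustration:

```python
# A minimal sketch: repeated measurements summarized as an average
# (compare to the true value for accuracy) and a standard deviation
# (smaller spread means better precision). All values hypothetical.
from statistics import mean, stdev

true_value = 2.500                       # g, known to the instructor
trials = [2.49, 2.52, 2.51, 2.48, 2.50]  # one student's five trials, g

avg = mean(trials)      # closeness of avg to true_value -> accuracy
spread = stdev(trials)  # scatter among the trials -> precision

print(f"average = {avg:.3f} g, std dev = {spread:.3f} g")
```

Here the average lands right on the true value (good accuracy) while the standard deviation of about 0.016 g describes how tightly the five trials cluster (the precision).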

!Warning! Students need a word of warning regarding the terms accuracy and precision: in other disciplines, these terms have meanings very different from their meanings in chemistry.

What will happen to the precision and accuracy of the results for the student above who changes balances during an experiment?

The precision should be poor if a student randomly changes balances when repeating the same procedures. Suppose balance A reads high by 0.1 gram and balance B reads low by 0.1 gram. If the student uses balance A for the first trial and balance B for the second trial, the two results should differ by roughly 0.2 gram. We can't predict how this will affect the accuracy of the student's result because we don't know how many other errors he or she might have made and what impact those errors had. All of the errors could cancel each other out and lead to a perfect result, but most of us aren't that lucky.
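The precision penalty in this scenario can be made concrete. Using the same hypothetical offsets as above (balance A +0.1 g, balance B -0.1 g) and a 2.500 g sample measured four times:

```python
# Sketch of the two-balance scenario (offsets are hypothetical):
# sticking with one balance gives consistent (if wrong) readings;
# alternating balances spreads the trials apart by 0.2 g.
from statistics import stdev

true_mass = 2.500  # g, assumed true sample mass

one_balance = [true_mass + 0.1 for _ in range(4)]              # A, A, A, A
alternating = [true_mass + d for d in (0.1, -0.1, 0.1, -0.1)]  # A, B, A, B

print(stdev(one_balance))  # 0.0   -> consistently wrong, but precise
print(stdev(alternating))  # ~0.115 -> scattered trials: poor precision
```

The single-balance readings are all identical (zero standard deviation), illustrating why a systematic offset leaves precision intact even though every reading is wrong.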





Uncertainty and Significant Figures

Because all measurements have errors, scientists have developed methods of representing numbers so that they indicate a degree of trustworthiness. Measurements have uncertainty (the lowest decimal place that can be measured, with estimation). If we use a ruler to measure length, we read the centimeters directly from the scale and count the millimeter marks directly. We estimate the length to the tenth of a millimeter based on where it falls between marks (halfway is 0.5 mm, for instance). The uncertainty in that ruler is a tenth of a millimeter. If we try to use that instrument to measure something very tiny, we can't expect to get very good results. If we perform mathematical operations with the numbers that result from such measurements, we need to round our answers so that they don't imply less uncertainty than our instruments warrant. The need to clearly report the level of confidence we have in our measurements is the reason we assign significant figures to a measurement and use those complicated rules to determine where to round a mathematical result.
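The rounding step can be sketched in a few lines. The measurements below are hypothetical, and the helper function `round_sig` is an illustrative utility, not part of any standard library; it applies the common rule that a product should be reported with as many significant figures as the least certain measurement:

```python
# Sketch: rounding a calculated result so it does not claim more
# certainty than the measurements support. round_sig is a
# hypothetical helper; measurement values are invented.
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    return round(x, sig - 1 - floor(log10(abs(x))))

length = 12.35  # mm, 4 significant figures (last digit estimated)
width = 3.1     # mm, only 2 significant figures

area = length * width      # raw product: 38.285 mm^2, too many digits
print(round_sig(area, 2))  # report 2 sig figs, matching width: 38.0
```

Reporting 38.285 mm² would imply we know the area to a thousandth of a square millimeter, far more certainty than a two-significant-figure width can support.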