MEASUREMENT TECHNIQUES:
Analogue vs. Digital
 


The Basic Difference

The primary difference between analogue and digital is that an analogue instrument is a physical reaction to the input signal. For example, the greater the signal, the greater the pull on a coil within a magnet, the coil being attached to a needle that indicates the value of this pull on a dial.

With digital, the input is taken through a process called "analogue to digital conversion", whereby the magnitude of the signal is given a numeric value. This is presented to other electronics, usually a microprocessor, in a digital format, typically binary.

Once in binary format, the digital electronics makes the value available to the user in a number of ways, the most typical being the Liquid Crystal Display or LCD. It is, however, not uncommon to find the display itself being a moving-coil meter. It may appear a little strange to have a digital process between two analogue points, the input and the display, but this is done when certain calculations are required between the two, e.g. square rooting, RMS, etc. Such calculations are extremely difficult to perform with analogue electronics.
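
As a small illustration of why that digital stage is worth having (a minimal sketch in Python; the sample values are simply a sine wave generated for the example, not readings from any real instrument), an RMS calculation becomes trivial once the waveform exists as numbers:

    import math

    # Minimal sketch: computing the RMS of a waveform once it exists as numbers.
    # The samples below are one cycle of a sine wave generated for the example;
    # in an instrument they would come from the analogue-to-digital converter.
    samples = [325.0 * math.sin(2 * math.pi * n / 64) for n in range(64)]

    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    print(round(rms, 1))   # ~229.8, i.e. close to 325 / sqrt(2) = 230 V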



Understanding Resolution

As can be seen from the above explanation, the process of analogue to digital conversion turns the level of an analogue signal into a digital 'number'. With digital electronics, the range of numbers that can be transferred from one section to another depends mainly on the number of "bits" available (a 'bit' is a binary digit). Not long ago this was limited to 8 bits, which meant a maximum range of 0 to 255. The full scale deflection was limited to this range, so the whole scale was split into 256 segments, in much the same manner as an analogue meter's dial.

As an example we'll refer to a hypothetical digital system used to measure mains voltage, given an FSD of 255 Vrms (yes, this was deliberate; it makes the explanation easier). This means each segment represents one volt. When a human uses an analogue meter to read a voltage and the needle falls between two markings, the user can mentally create and read subdivisions and therefore arrive at a near reading. Digital electronics cannot do this. An input whose value falls within any one segment will be given the value of that segment. Digital electronics cannot "take a reading between the markings".

It is this limitation, the fact that the digital system cannot determine or resolve how far the 'needle' is away from the 'markings', that is given the term "resolution". Should a voltage be 240.9 Vrms, the digital system will still read this as 240 Vrms, yet an analogue meter would clearly indicate very close to 241 Vrms.
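
A minimal sketch of that limitation in Python, using the hypothetical 255 Vrms, one-volt-per-segment system above:

    # Minimal sketch of the resolution limit in the hypothetical 255 V FSD,
    # 8-bit (1 V per segment) example system described above.
    STEP_V = 1.0   # one segment = 1 V

    def digital_reading(volts: float) -> float:
        """Everything inside a segment collapses to that segment's value."""
        return (volts // STEP_V) * STEP_V

    print(digital_reading(240.9))   # 240.0 - the 0.9 V is simply lost
    print(digital_reading(240.1))   # 240.0 - indistinguishable from 240.9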


How This Affects Accuracy:

Is resolution a problem? At about 230 Vrms our example system, only being able to resolve to 1V, would deem the reading to be within about 0.43%. For the most part this would be sufficient.

It is when the system is wired to, say, the 110V secondary of an 11kV transformer that this problem makes itself more known. As the system can only resolve the input down to 1V, the resolution has dropped to 1V in 110, or 0.91%. The problem is even more exaggerated when trying to measure lower voltages, such as a 110V 3-phase delta winding, which translates into a 63V phase-to-neutral voltage. This would make the resolution only 1V in 63, or about 1.59%. Note that we have not used the word "accuracy"; that would imply the 'markings' themselves were not correct. The net result, however, does limit the certainty of the reading. The reading could be very close to the next volt, yet the reader cannot know this.
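
Expressed as a quick calculation (a sketch using the 1V step and the voltages mentioned above):

    # Resolution expressed as a percentage of the voltage actually being
    # measured, for the 1 V-per-step example system.
    STEP_V = 1.0

    def resolution_pct(volts: float) -> float:
        return STEP_V / volts * 100.0

    for v in (230.0, 110.0, 63.0):
        print(f"{v:5.0f} V -> {resolution_pct(v):.2f} % of reading")

    # 230 V -> 0.43 %
    # 110 V -> 0.91 %
    #  63 V -> 1.59 %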

It can now be seen how important it is to operate the input of an analyser within an acceptable range of the input's capability. Let's take it to the absolute extreme. Should we operate at the very bottom end of the scale, e.g. a 2V signal on the 255V input, the digital process can only read 0, 1 or 2. With only three steps, how accurately can the purity of the waveform be determined? Furthermore, with this limitation, imagine what Fourier analysis would report as harmonic content.
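
A sketch of that extreme case (numpy is assumed here, simply to provide the FFT): a clean 2V sine, quantised onto the 1V grid of the 255V input, appears to contain harmonics that were never in the original signal.

    import numpy as np

    # Sketch of the extreme case: a clean 2 V (peak) sine measured on an input
    # that can only resolve whole volts. The staircase that results contains
    # harmonics that do not exist in the real signal.
    N = 256
    t = np.arange(N) / N
    clean = 2.0 * np.sin(2 * np.pi * t)          # pure fundamental, 2 V peak
    quantised = np.round(clean)                  # only -2, -1, 0, 1, 2 survive

    spectrum = np.abs(np.fft.rfft(quantised)) / (N / 2)
    fundamental = spectrum[1]
    for h in (3, 5, 7):
        print(f"harmonic {h}: {spectrum[h] / fundamental * 100:.1f} % of fundamental")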

Modern digital systems are much improved, and it is not uncommon to find resolutions of at least 1 in 1000 of FSD (i.e. 0.1%). The Telog Linecorder range boasts this, while the Reliable Power Meters range boasts better than 1 in 8000 of FSD. Translated, this means 0.0125% resolution of FSD, i.e. roughly 0.1V on a 700V FSD. If the input is operated away from the lower portion of its range (above roughly 10% of FSD), the uncertainty is usually kept low enough not to be a concern.

There is one other aspect of resolution that tends to throw a few engineers, and that is that resolution is also affected by any multiplier values. In our example above, the Reliable Power Meters instrument effectively has a resolution of approximately 86.5mV, so all values, including derived readings, appear in steps of 86.5mV. At 110 Vrms this gives a resolution of about 0.078%.

When monitoring very high voltages via large-ratio VTs, e.g. 33kV to 110V (a ratio of 300:1), things take on a different look. It takes only a slight bit of noise for a value of 86.5mV to be seen internally, but once the multiplier is added to the equation, voltages of nearly 26V are reported. What is often forgotten (and this is where the mistake happens) is that the 26V is not relative to the 110V, but to the original 33kV. The percentage has never changed.
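
A sketch of that trap, using the figures above (the 86.5mV internal step and the 300:1 VT ratio):

    # Sketch of how a multiplier (VT ratio) scales the resolution step.
    # Figures from the example above: an internal step of 86.5 mV and a
    # 33 kV : 110 V voltage transformer (ratio 300:1).
    INTERNAL_STEP_V = 0.0865
    VT_RATIO = 33_000 / 110          # 300:1

    reported_step = INTERNAL_STEP_V * VT_RATIO
    print(f"reported step: {reported_step:.1f} V")                              # ~26 V
    print(f"as % of 33 kV: {reported_step / 33_000 * 100:.3f} %")               # ~0.079 %
    print(f"internal step as % of 110 V: {INTERNAL_STEP_V / 110 * 100:.3f} %")  # same %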



Understanding Input (Full Scale) Accuracy

Before the signal is converted into a digital format by the analogue to digital convertor, it passes through a number of analogue stages, typically voltage dividers and input amplifiers, each with possible inaccuracies that affect the true value of the signal.

The analogue to digital convertor, too, has a 'reference' injected into it against which it measures the incoming signal. As this is also an analogue value (usually a few volts), it too has an effect on how accurately the convertor changes the input into a digital value. The analogy would be how strong the magnet is in the moving-coil meter.

Although the input signal is converted into a number, the full scale deflection accuracy is no different from that of an analogue meter. It is simply the full scale voltage divided by the actual input voltage that causes a full scale deflection. Please note the 'direction' of the ratio; even test houses get this one wrong! Said a different way (and using the analogue meter analogy), the input is adjusted until the 'needle' sits exactly above the final 'mark' on the 'dial'.


How This Affects Accuracy:

In our example, should the digital system require a 250V input to reach the point where it just deems the input to be 255V (i.e. FSD), then the accuracy of the input is +2%, i.e. the input reads 2% high at this point. Should the input require 252.48V to reach this point, then the input reads +1% high at full scale.

In digital systems this error tends to be the same all the way through the range (we discuss this aspect in the next section). This means that, taking our rather inaccurate 2% error, should the input be exactly 100V, our system will read 102V.
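
A sketch of that arithmetic, using the +2% example figures:

    # Sketch of full-scale (gain) accuracy: the marked full-scale value divided
    # by the actual input voltage that produces full-scale deflection.
    FSD_MARKED_V = 255.0

    def gain_error_pct(input_at_fsd: float) -> float:
        """Positive result means the instrument reads high."""
        return (FSD_MARKED_V / input_at_fsd - 1.0) * 100.0

    print(f"{gain_error_pct(250.0):+.1f} %")     # +2.0 % (reads 2 % high)
    print(f"{gain_error_pct(252.48):+.1f} %")    # +1.0 %

    # Because the gain error applies across the whole range, a true 100 V input
    # on the +2 % instrument is reported as about 102 V.
    true_v = 100.0
    print(f"{true_v * (1 + gain_error_pct(250.0) / 100):.1f}")   # 102.0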



Understanding Linearity

This is simply how accurately the steps are repeated or, in other words, how evenly the markings are spread across the 'face' of our digital input 'meter'. In our example this would be how accurately the analogue to digital convertor can determine the start of the next volt.

This is, to all intents and purposes, a throwback to the analogue meter days, where a meter could easily be more sensitive to change at one point on the dial than at another. This could mean, for example, that the meter reads 100V at the centre of a 200V scale with a 100V input, but only 195V with a 200V input, because of, say, inaccuracies in the manufacture of the circular magnet around the coil.

In digital systems these variations from point to point will be exceedingly small. If a large non-linearity were present it could falsely indicate harmonic content where in reality there is none. On modern digital recorders such non-linearity is almost non-existent and may be ignored.
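
One simple way to put a number on it (a sketch of an end-point check; the readings are the analogue-meter figures from the paragraph above) is to draw a straight line through the zero and full-scale readings and see how far the mid-scale reading deviates from it:

    # Sketch of a simple end-point linearity check: draw a straight line through
    # the zero and full-scale readings, then see how far each reading deviates
    # from that line. Figures from the analogue-meter example above.
    applied = [0.0, 100.0, 200.0]     # voltages applied to the meter
    indicated = [0.0, 100.0, 195.0]   # what the meter shows

    zero_rdg, full_rdg = indicated[0], indicated[-1]
    span = applied[-1] - applied[0]

    def straight_line(v: float) -> float:
        """Ideal reading if the meter were perfectly linear between its end points."""
        return zero_rdg + (v - applied[0]) / span * (full_rdg - zero_rdg)

    for v, r in zip(applied, indicated):
        dev = r - straight_line(v)
        print(f"{v:5.0f} V applied: deviation {dev:+.1f} V ({dev / span * 100:+.2f} % of span)")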



Understanding Conversion Accuracy

Although the equivalent of 'analogue non-linearity' can be ignored in digital systems, there is a digital counterpart called "conversion accuracy". Without going into the deep dark depths of how analogue to digital conversion works, this inaccuracy is induced by a number of factors, the prime one being noise.

An analogy would be someone distracting you while you try to measure a mile accurately with a 12-inch ruler. While moving the ruler from one point to the next it is possible to 'slip' and not place the start of the ruler exactly where the previous measurement ended. The only difference is that the convertor always manages to count exactly how many times it placed the ruler end on end.

The main issue with noise is that it could cause the input to just make it to a higher segment on one sample, but not on the next. Because of this 'jitter', all analogue to digital conversion accuracy specifications must include the term "±1 count".
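
A sketch of that jitter (numpy is assumed here for the random noise, and the 50mV noise level is invented purely for the illustration): an input sitting just below a segment boundary flips between two adjacent codes from one sample to the next.

    import numpy as np

    # Sketch of the ±1 count effect: an input sitting just below a 1 V segment
    # boundary, with a small amount of noise added, lands in different segments
    # on different samples. The 50 mV noise level is made up for the illustration.
    rng = np.random.default_rng(0)

    true_input = 229.98                      # just below the 230 V boundary
    noise = rng.normal(0.0, 0.05, size=10)   # 50 mV rms noise
    codes = np.floor(true_input + noise)     # 1 V segments

    print(codes)   # a mixture of 229s and 230s for the same input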


How This Affects Accuracy:

One count is equal to the resolution voltage. This voltage must be accepted as a possible deviation from the input voltage. In our example this is 1V. At 200V input this could mean a ±0.5% error, at 100V this rises to ±1%.

In some instrumentation the designers use "log-law" analogue to digital convertors in an attempt to overcome the problem of resolution affecting low-end readings. The convertor takes smaller 'steps' at the low end of the input range and increases the step size higher up. This maintains the overall conversion accuracy, but at the expense of resolution at the higher end of the input range. Although the step size is larger at the higher end, the overall accuracy is not affected because the ratio of step size to input voltage does not increase.

This technique is still in use today in "cheap and nasty" voltage monitors, as the analogue to digital convertor can be a less expensive one with a minimum of bits.
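
A sketch of the idea (a simplified logarithmic step scheme invented for the illustration, not any particular converter's law): the step size grows with the signal, so it stays a roughly constant fraction of the reading.

    import math

    # Sketch of a "log-law" style converter: codes are spaced logarithmically,
    # so the step size is small at the bottom of the range and large at the top,
    # while the step as a fraction of the reading stays roughly constant.
    # The 1 V floor and 255 V full scale are the example figures used above.
    V_MIN, V_MAX = 1.0, 255.0
    CODES = 256

    def to_code(volts: float) -> int:
        volts = max(V_MIN, min(volts, V_MAX))
        frac = math.log(volts / V_MIN) / math.log(V_MAX / V_MIN)
        return int(frac * (CODES - 1))

    def code_to_volts(code: int) -> float:
        return V_MIN * (V_MAX / V_MIN) ** (code / (CODES - 1))

    for v in (2.0, 20.0, 200.0):
        c = to_code(v)
        step = code_to_volts(c + 1) - code_to_volts(c)
        print(f"{v:6.1f} V -> step {step:.3f} V ({step / v * 100:.2f} % of reading)")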

Modern power recorders tend to have linear convertors but with a high bit number. Examples are Telog Linecorders with resolutions of 0.3V and 0.6V, depending on model. Reliable Power Meters boast better than 100mV resolution.



Determining Overall Accuracy

"The crux of the matter!"

In determining the overall accuracy of the instrument, one needs to combine the input accuracy, the linearity error and the conversion accuracy. The input accuracy can be determined, the linearity of modern systems can be ignored, and the conversion accuracy is simply ±1 count, i.e. the resolution voltage.

If we use our rather inferior example system above, which had an input error of 2% and a resolution of 1V (remember, we are ignoring linearity), then our accuracy can be deemed no better than ±2% ±1V. At a voltage near half range, e.g. 120V, this means we could read anywhere from 116.6 to 123.4, but as our system only resolves to 1V this becomes 116 to 123.
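
A sketch of that combination, using the 2% / 1V example figures:

    # Sketch of combining the error sources for the example instrument:
    # a 2 % input (gain) error and a 1 V resolution, linearity ignored.
    GAIN_ERROR = 0.02   # ±2 %
    STEP_V = 1.0        # resolution, also the ±1 count allowance

    def reading_band(true_volts: float) -> tuple[float, float]:
        low = true_volts * (1 - GAIN_ERROR) - STEP_V
        high = true_volts * (1 + GAIN_ERROR) + STEP_V
        # the instrument can only display whole steps
        return (low // STEP_V) * STEP_V, (high // STEP_V) * STEP_V

    print(reading_band(120.0))   # (116.0, 123.0)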

Modern systems are a lot better than this and, depending on their application, usually boast accuracies as good as ±0.1% ±0.1V.




© 01.11.01