Sample and Recording
Rates, Intervals, & Periods
If there is one confusing aspect of recording and logging devices, it is the meaning of "sampling" and "recording" periods. Adding to this confusion is a further term, the "sampling frequency" or "rate". The following are the accepted terms, although not all manufacturers of logging devices use them correctly.
SAMPLING FREQUENCY or RATE
Also known as the "Analogue to Digital Conversion Rate", meaning how often the analogue signal is converted into a digital value. Simply put, this is how often the digital circuitry looks at the incoming signal. The known figure for the Telog Linecorder is "16 samples per half-cycle", meaning the microprocessor converts the absolute magnitude of the incoming signal into a digital value 16 times in each half-cycle.
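As a quick check on what a per-half-cycle figure means in absolute terms, the sketch below converts "16 samples per half-cycle" into samples per second, assuming an illustrative 50 Hz supply (the mains frequency is not stated in the text):

```python
# Effective A/D conversion rate for a logger taking 16 samples per
# half-cycle. The 50 Hz supply frequency is an assumption for illustration.
samples_per_half_cycle = 16
mains_frequency_hz = 50              # two half-cycles per cycle

samples_per_second = samples_per_half_cycle * 2 * mains_frequency_hz
print(samples_per_second)            # 1600 samples per second
```

On a 60 Hz supply the same logger would convert 1920 times per second.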
In semi-complex systems, the least that is determined from this is the absolute RMS value of the half-cycle. This is usually then compared to stored minimum and maximum values, and a new value is stored if either of the two requires adjustment. In more complex systems, these samples are analysed by the microprocessor to derive further values such as the peak of the cycle, distortion levels, disturbances within a cycle, and so on. Complex values may take many cycles to ascertain. Once determined, these values are made available for logging.
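A minimal sketch of this step, computing the RMS of one half-cycle's samples and updating running minimum and maximum values (the sample values are invented for illustration):

```python
import math

def half_cycle_rms(samples):
    """RMS of one half-cycle's worth of A/D samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Running minimum and maximum, adjusted whenever a new RMS value requires it.
running_min = float("inf")
running_max = float("-inf")

# Three illustrative half-cycles of 16 samples each (constant for simplicity,
# so each RMS equals the sample value).
for half_cycle in ([230.0] * 16, [225.0] * 16, [240.0] * 16):
    rms = half_cycle_rms(half_cycle)
    running_min = min(running_min, rms)
    running_max = max(running_max, rms)

print(running_min, running_max)      # 225.0 240.0
```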
SAMPLING INTERVAL or PERIOD
Not to be confused with the sampling frequency or rate. This is the time between successive readings of all of the values of the input signal, as determined during this period by the samples taken. Another way of viewing it: this is the rate at which the logging process reads the values produced by the analysis in the previous step. The minimum interval is usually one second, and the values are normally written away to a temporary memory.
RECORDING INTERVAL
This is how often the values gathered during the sampling interval are written away to the non-volatile memory, be that battery-backed RAM or a hard drive. The values written away are often the minimum and maximum during, and the average for, the recording interval.
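The reduction described above can be sketched as a small function that takes one recording interval's worth of per-second readings and produces the three values typically written to non-volatile storage (the readings are illustrative):

```python
def aggregate_interval(readings):
    """Reduce one recording interval's per-second readings to the
    minimum, maximum, and average written to non-volatile memory."""
    return {
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

# Illustrative contents of the temporary store for one interval.
per_second_rms = [229.5, 230.1, 231.0, 228.7]
record = aggregate_interval(per_second_rms)
print(record)                        # min 228.7, max 231.0, avg 229.825
```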
Note: In some systems the sampling interval and the recording interval may be one and the same, combining the two processes into one. The split is usually made because it reduces temporary memory requirements, but in more modern systems this may well prove not to be a problem, and program speed is achieved at the expense of using more of the available memory.
RECORDING PERIOD
This is the total time span of the collected data. Owing to memory constraints, it is often pre-determined by the recording interval: a shorter recording interval means the memory (either solid state or hard disk) fills up more quickly. Depending on the type of fault, this often presents the user with a trade-off between wanting resolution and wanting to record over a long period.
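The memory-versus-resolution trade-off can be made concrete with some arithmetic. The figures below (memory size, record size) are assumptions for illustration, not specifications of any particular logger:

```python
# Recording period = (records that fit in memory) x (recording interval).
memory_bytes = 512 * 1024            # assumed 512 KiB of logging memory
bytes_per_record = 12                # e.g. three 4-byte floats: min, max, avg
recording_interval_s = 60            # one record per minute

records = memory_bytes // bytes_per_record
recording_period_s = records * recording_interval_s
recording_period_days = recording_period_s / 86400

print(records, recording_period_days)  # 43690 records, about 30 days
```

Halving the recording interval to 30 seconds would halve the recording period to roughly 15 days, which is the trade-off described above.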
As an example:
The input signal is sampled 16 times in each half-cycle. The microprocessor analyses these samples, determines a mean RMS value for the input, stores it, and adjusts any minimum or maximum values read. These are then made available for logging.
Once a second, the RMS, minimum, and maximum values are read and written away to a temporary store. No calculations are really done at this point, although the programmer may decide to calculate the average for the past sampling interval here.
Once a minute, the stored values are checked: the lowest is taken as the minimum, the highest as the maximum, and the average of the 60 readings taken during the minute is calculated. These three values are then written away to the non-volatile memory.
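The steps above can be sketched as a single pipeline: per-second values accumulate in a temporary store, and once a minute they are reduced to a min/max/avg record. All data here is invented for illustration:

```python
temporary_store = []                 # written once a second
records = []                         # written once a minute (non-volatile)

for second in range(120):            # two minutes of illustrative data
    # Stand-in for the per-second RMS produced by the sampling stage:
    # values cycle through 230.0, 231.0, 232.0.
    value = 230.0 + (second % 3)
    temporary_store.append(value)

    # Once a minute: reduce 60 readings to min, max, and average,
    # write the record away, and clear the temporary store.
    if len(temporary_store) == 60:
        records.append((min(temporary_store),
                        max(temporary_store),
                        sum(temporary_store) / 60))
        temporary_store.clear()

print(records)                       # two (230.0, 232.0, 231.0) records
```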
When the memory system is full, the data either stops being recorded or starts overwriting the oldest data recorded. The span of time from the oldest to the newest data is the recording period.
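The overwrite-oldest behaviour is a circular (ring) buffer. A minimal sketch using a fixed-capacity deque, with an assumed capacity of 5 records:

```python
from collections import deque

# Circular log: when the store is full, appending silently drops the
# oldest record, so the buffer always holds the most recent entries.
store = deque(maxlen=5)              # assumed capacity of 5 records

for minute in range(10):
    store.append((minute, 230.0))    # (timestamp, illustrative value)

print(list(store))                   # records for minutes 5 through 9 only
```

The recording period here is the span from the oldest surviving record to the newest one, i.e. the last five minutes.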
Using What's in the PQ Bag