The ultrasonic flowmeter described in this paper measures the transit time of an ultrasonic pulse. The device consists of two ultrasonic transducers and a high-resolution time interval measurement module. One transducer emits a characteristic wave packet (transmit mode); the transducer operating in receive mode picks up the wave packet, and its output is connected to the inputs of the time interval measurement module. The module allows registration of the transit time differences of several pulses within the packet, so in practice several time stamps are registered during a single measuring cycle. Moreover, because the measurement process is synchronous, statistical processing of these time stamps improves the time interval measurement uncertainty even within a single measurement. In this article, besides a detailed discussion of the operating principle of the ultrasonic flowmeter implemented in an FPGA structure, the test results are also presented and discussed.
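As an illustration of the measurement principle, the sketch below derives flow velocity from upstream and downstream transit times using the standard transit-time relation, and averages several time stamps from one measuring cycle to reduce the random timing uncertainty. All numeric values, the path length, and the beam angle are hypothetical; this is a minimal sketch of the textbook relation, not the authors' implementation.

```python
import math
import statistics

def flow_velocity(t_up, t_down, path_len, angle_deg):
    """Standard transit-time relation:
    v = L * (t_up - t_down) / (2 * cos(theta) * t_up * t_down)."""
    theta = math.radians(angle_deg)
    return path_len * (t_up - t_down) / (2 * math.cos(theta) * t_up * t_down)

# Several time stamps registered in one measuring cycle (hypothetical, in s)
t_up_samples   = [67.112e-6, 67.109e-6, 67.114e-6, 67.111e-6]
t_down_samples = [66.988e-6, 66.991e-6, 66.986e-6, 66.989e-6]

# Averaging the registered time stamps reduces the random timing uncertainty
t_up   = statistics.mean(t_up_samples)
t_down = statistics.mean(t_down_samples)

v = flow_velocity(t_up, t_down, path_len=0.1, angle_deg=45.0)
print(f"estimated flow velocity: {v:.3f} m/s")
```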
A new time interval/frequency generator with a jitter below 5 ps is described. The time interval generation mechanism is based on a phase-shifting method using a precise DDS synthesizer. The output pulses are produced in a Spartan-6 FPGA device, manufactured by Xilinx in 45 nm CMOS technology. Thorough tests of the phase shifting in a selected synthesizer are performed. A time interval resolution as low as 0.3 ps is achieved; however, the final resolution is limited to 500 ps to maximize precision. The designed device can be used as a source of high-precision reference time intervals or of a highly stable square-wave signal with a frequency of up to 50 MHz.
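The sub-picosecond resolution quoted above follows from the fine phase granularity of a DDS phase register: shifting the phase by one LSB shifts the output edge by a corresponding fraction of the output period. A minimal sketch of this scaling, assuming a generic N-bit phase offset register (the register widths and the 100 MHz output frequency below are illustrative, not taken from the paper):

```python
def dds_time_step(f_out_hz: float, phase_bits: int) -> float:
    """Smallest time shift from incrementing an N-bit DDS phase register
    by one LSB: dt = T / 2**N = 1 / (f_out * 2**N)."""
    return 1.0 / (f_out_hz * 2 ** phase_bits)

# Hypothetical numbers: a 14-bit phase register at a 100 MHz output
# already yields a time step of about 0.6 ps per LSB.
for bits in (12, 14, 16):
    dt = dds_time_step(100e6, bits)
    print(f"{bits}-bit phase register @ 100 MHz -> {dt * 1e12:.3f} ps per LSB")
```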
The design of high-resolution time interval measurement systems creates many problems that need to be eliminated: the latch error, the conversion nonlinearity, variations in the duty cycle of the clock signal, and the clock signal jitter. All of these factors affect the measurement result. The FPGA (Field Programmable Gate Array) structure also imposes some restrictions, especially when a tapped delay line is constructed. The article describes a high-resolution time-to-digital converter implemented in an FPGA structure and the types of errors that appear in it. A method of minimizing these errors and of processing the data to reduce their influence on the measurement is also described.
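The conversion nonlinearity of a tapped delay line is commonly characterized with a statistical code density test: hits from a source uncorrelated with the clock are histogrammed per tap, and the relative bin counts give the actual tap delays, from which a calibrated transfer function is built. A minimal sketch under that assumption (the 16-tap line and 5 ns clock period are illustrative, not the paper's design):

```python
import numpy as np

def code_density_calibration(hit_histogram, clock_period):
    """Convert a per-tap hit histogram into estimated tap delays and
    calibrated bin-center time stamps. Assumes hits are uniformly
    distributed over one clock period (standard code density test)."""
    hits = np.asarray(hit_histogram, dtype=float)
    bin_widths = hits / hits.sum() * clock_period        # actual tap delays
    bin_edges = np.concatenate(([0.0], np.cumsum(bin_widths)))
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2   # calibrated values
    return bin_widths, bin_centers

# Hypothetical 16-tap delay line with uneven tap delays in a 5 ns period
rng = np.random.default_rng(0)
hist = rng.integers(800, 1400, size=16)
widths, centers = code_density_calibration(hist, clock_period=5e-9)
print("estimated tap delays (ps):", np.round(widths * 1e12, 1))
```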
The time-interleaved analog-to-digital converter (ADC) architecture is crucial for increasing the maximum sample rate. However, offset mismatch, gain mismatch, and timing error between the interleaved channels degrade the performance of time-interleaved ADCs. This paper focuses on the gain mismatch and the timing error. Techniques based on the Discrete Fourier Transform (DFT) for estimating and correcting gain mismatch and timing error in an M-channel ADC are presented. Numerical simulations are used to verify the proposed estimation and correction algorithm.
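Gain and timing mismatch modulate the input at the channel-switching rate, producing spurious tones at offsets of multiples of fs/M around the signal; this spectral signature is what makes the errors observable in the DFT. The sketch below only reproduces that signature in simulation, under illustrative values of M, gain error, and skew; it is not the paper's estimation and correction algorithm.

```python
import numpy as np

M, N = 4, 4096                # channels and total samples (hypothetical)
fin = 0.1234                  # input frequency, normalized to fs = 1
gains = 1.0 + np.array([0.00, 0.02, -0.01, 0.015])   # per-channel gain error
skew  = np.array([0.0, 0.003, -0.002, 0.004])        # timing skew, in Ts

n = np.arange(N)
ch = n % M                    # sample n is taken by channel n mod M
x = gains[ch] * np.sin(2 * np.pi * fin * (n + skew[ch]))

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(N))) + 1e-12)
spec -= spec.max()            # dB relative to the signal tone

def level_near(f):            # peak level within +/-2 bins of frequency f
    k = int(round(f * N))
    return spec[max(k - 2, 0):k + 3].max()

# Mismatch tones sit at k/M +/- fin around the channel-switching rate
for f in (fin, 1/M - fin, 1/M + fin, 2/M - fin):
    print(f"tone near f = {f:.4f}: {level_near(f):6.1f} dBc")
```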
This work presents a time-domain method for the discrimination and digitization of the parameters of voltage pulses coming from optical detectors, taking into account the presence of electronic noise and afterpulsing. Our scheme is based on an FPGA-implemented time-to-digital converter and an adjustable-threshold comparator, complemented with commercial elements. The design, implementation, and optimization of a multiphase TDC using delay lines shorter than a single clock period are also described. The performance of this signal-processing system is discussed through the results of a statistical code density test, statistical distributions of measurements, and information gathered from an optical detector. Unlike dual-voltage-threshold discriminators or constant-fraction discriminators, the proposed method uses both amplitude and time information to define an adjustable discrimination window that enables the acquisition of spectra.
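The general idea of a multiphase TDC is that sampling the input with several equally spaced phases of the same clock locates an event within a fraction of the clock period, combining a coarse counter with a fine thermometer code. A minimal behavioral sketch assuming four phases and an ideal, jitter-free model (the phase count and the 5 ns period are illustrative, not the paper's exact design):

```python
import numpy as np

def multiphase_tdc(event_time, clock_period, n_phases=4):
    """Locate an event within one clock period by checking which of
    n_phases equally spaced sampling phases have already fired.
    Ideal resolution is clock_period / n_phases."""
    coarse = int(event_time // clock_period)          # coarse counter value
    frac = event_time - coarse * clock_period
    phase_times = np.arange(n_phases) * clock_period / n_phases
    fine_code = int(np.sum(frac >= phase_times))      # thermometer code
    return coarse, fine_code

T = 5e-9                                              # hypothetical 200 MHz clock
for t in (12.1e-9, 13.4e-9, 14.9e-9):
    coarse, fine = multiphase_tdc(t, T)
    t_hat = coarse * T + (fine - 0.5) * T / 4         # bin-center estimate
    print(f"event {t*1e9:5.2f} ns -> coarse={coarse}, fine={fine}, "
          f"estimate {t_hat*1e9:6.3f} ns")
```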
This paper provides an overview of the effects of timing jitter in audio sampling analog-to-digital converters (ADCs), i.e. PCM (conventional or Nyquist sampling) ADCs and sigma-delta (ΣΔ) ADCs. Jitter in digital audio is often defined as short-term fluctuations of the sampling instants of a digital signal from their ideal positions in time. The influence of jitter grows particularly with the improvements in both the resolution and the sampling rate of today's audio ADCs. At higher input signal frequencies, sampling jitter becomes a dominant factor limiting ADC performance in terms of signal-to-noise ratio (SNR) and dynamic range (DR).
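The frequency dependence noted above follows directly from the standard jitter-limited SNR bound for a full-scale sine input, SNR = -20·log10(2π·f_in·σ_j). A quick numeric check using illustrative jitter values:

```python
import math

def jitter_limited_snr_db(f_in_hz: float, jitter_rms_s: float) -> float:
    """Jitter-limited SNR for a full-scale sine input:
    SNR = -20 * log10(2 * pi * f_in * sigma_j)."""
    return -20 * math.log10(2 * math.pi * f_in_hz * jitter_rms_s)

# For a 20 kHz tone, 1 ns rms jitter caps the SNR near 78 dB, well below
# what a high-resolution audio ADC can otherwise deliver; 10 ps rms
# raises the cap to about 118 dB.
for sigma in (1e-9, 100e-12, 10e-12):
    print(f"sigma_j = {sigma*1e12:6.0f} ps -> SNR limit = "
          f"{jitter_limited_snr_db(20e3, sigma):6.1f} dB")
```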