Although the phenomenon of otoacoustic emissions has been known for nearly 30 years, it has not yet been fully explained. One kind of otoacoustic emission is the distortion product otoacoustic emission (DPOAE). New aspects of this phenomenon are constantly being discovered, and attempts are made to interpret the obtained results correctly. This paper discusses a new method of measuring DPOAE signals based on double phase-sensitive detection, which makes real-time measurement of the DPOAE signal amplitude and phase possible. The method was applied to measurements of DPOAE signals in guinea pigs. Sample records are presented and the obtained results are discussed.
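The amplitude-and-phase extraction at the distortion-product frequency can be illustrated with a single-stage digital lock-in (phase-sensitive) detector. This is a minimal numpy sketch of the principle only, not the authors' double-detection instrument; the sampling rate, primary frequencies, and DPOAE level below are invented for the demo.

```python
import numpy as np

def lock_in(signal, f_ref, fs):
    """Phase-sensitive detection: mix the signal with quadrature
    references at f_ref and average, which acts as a low-pass filter."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase component
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature component
    return 2 * np.hypot(i, q), np.arctan2(-q, i)          # amplitude, phase

# hypothetical stimulus: two primaries f1, f2 and a weak DPOAE at 2*f1 - f2
fs, f1, f2 = 44_100, 2_000, 2_400
fd = 2 * f1 - f2                                          # 1600 Hz
t = np.arange(fs) / fs                                    # 1 s of data
mic = (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
       + 0.01 * np.cos(2 * np.pi * fd * t + 0.7))
amp, phase = lock_in(mic, fd, fs)                         # ~0.01, ~0.7 rad
```

In a real-time instrument the block-average would be replaced by a running low-pass filter, so the amplitude and phase estimates track the DPOAE continuously.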
The paper presents a procedure for correcting the error introduced into an ECG signal by the skin-electrode interface. The procedure involves three main measuring and computing stages: parametric identification of the mathematical model of the interface, performed directly before the diagnostic measurements; registration of the signal at the output of the electrodes; and reconstruction of the input signal of the interface.
The first two stages are realized in the on-line mode, whereas the reconstruction of the signal is a numerical task of digital signal processing, realized in the off-line mode through deconvolution of the registered signal with the transfer function of the skin-electrode interface.
The aim of the paper is to discuss in detail the procedure of parametric identification of the skin-electrode interface using a computer system equipped with a DAQ card and LabVIEW software. The algorithm for correcting the error introduced by this interface is also presented.
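The reconstruction stage can be sketched as follows. Here the skin-electrode interface is stood in for by a hypothetical one-pole low-pass filter whose coefficient plays the role of the identified model parameters, and the off-line deconvolution is the exact discrete inverse of that filter; the actual procedure identifies an RC interface model and may perform the deconvolution in the frequency domain.

```python
import numpy as np

def interface_filter(x, alpha):
    """Stand-in for the skin-electrode interface: one-pole low-pass,
    y[n] = (1 - alpha) * x[n] + alpha * y[n-1]."""
    y = np.empty_like(x)
    acc = 0.0
    for n, xn in enumerate(x):
        acc = (1 - alpha) * xn + alpha * acc
        y[n] = acc
    return y

def reconstruct(y, alpha):
    """Off-line deconvolution: exact inverse of the filter above,
    recovering the interface input from the registered output."""
    x = np.empty_like(y)
    prev = 0.0
    for n, yn in enumerate(y):
        x[n] = (yn - alpha * prev) / (1 - alpha)
        prev = yn
    return x

rng = np.random.default_rng(0)
ecg = rng.standard_normal(500)            # placeholder for the true ECG signal
distorted = interface_filter(ecg, 0.9)    # what the electrodes deliver
restored = reconstruct(distorted, 0.9)    # reconstruction of the input signal
```

The inverse is exact here because the model is known; with an identified (imperfect) model, regularized deconvolution would be used instead.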
The main goal of this research study is to create a method for loudness scaling based on categorical perception. Its main features, such as the testing procedure, the calibration procedure securing reliable results, and the use of natural test stimuli, are described in the paper and assessed against a procedure that uses 1/2-octave bands of noise (LGOB) for loudness growth estimation. The Mann-Whitney U-test is employed to check whether the proposed method is statistically equivalent to LGOB. It is shown that the loudness functions obtained with the two methods are similar in the statistical sense. Moreover, the band-filtered musical instrument signals are experienced as more pleasant than the narrow-band noise stimuli, and the proposed test is performed in a shorter time. The proposed method may be incorporated into hearing aid fitting strategies, or used for checking individual loudness growth functions and adapting them to the comfort level settings while listening to music.
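The statistical-equivalence check can be reproduced in outline with a hand-rolled Mann-Whitney U statistic; the loudness-category ratings below are invented for illustration, not the paper's data.

```python
import numpy as np

def mann_whitney_u(a, b):
    """U statistic: number of pairs (a_i, b_j) with a_i > b_j,
    ties counted as 1/2."""
    a = np.asarray(a, float)[:, None]
    b = np.asarray(b, float)[None, :]
    return float(np.sum((a > b) + 0.5 * (a == b)))

# invented categorical loudness ratings from the two procedures
proposed = [2, 3, 3, 4, 5, 5, 6]
lgob = [2, 2, 3, 4, 4, 5, 6]
u = mann_whitney_u(proposed, lgob)
m, n = len(proposed), len(lgob)
# normal approximation of U under H0 (identical distributions)
z = (u - m * n / 2) / np.sqrt(m * n * (m + n + 1) / 12)
```

A |z| below the critical value (1.96 at the 5% level) means the hypothesis of equivalent loudness functions is not rejected, which is the form of the paper's conclusion.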
The work presents the results of an experimental study on the possibility of determining the position of an ultrasonic source in two-dimensional space (distance, horizontal angle). The research team used a self-constructed linear array of MEMS microphones. Knowledge from the field of sonar systems was utilized to analyse and design a localization system based on the microphone array. These transducers, together with broadband ultrasound sources, allow a quantitative comparison of location estimates of an ultrasonic wave source obtained with broadband modulated signals (modelled on bats' echolocation signals). During the laboratory research the team tested various signal processing algorithms, which made it possible to select an optimal processing strategy for the case where the transmitted signal is known.
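For each microphone pair, direction finding reduces to time-delay estimation. A minimal cross-correlation sketch with an invented linear sweep (loosely modelled on a bat-like echolocation chirp) follows; the sampling rate, microphone spacing, and delay are all assumptions, not values from the study.

```python
import numpy as np

def delay_samples(x1, x2):
    """Delay of x2 relative to x1, from the cross-correlation peak."""
    c = np.correlate(x2, x1, mode="full")
    return int(np.argmax(c)) - (len(x1) - 1)

fs, c_air, spacing = 200_000, 343.0, 0.02   # Hz, m/s, m (assumed values)
t = np.arange(0, 0.002, 1 / fs)
# linear frequency sweep 40 -> 80 kHz over 2 ms
chirp = np.sin(2 * np.pi * (40_000 + 10_000_000 * t) * t)
mic1 = np.concatenate([chirp, np.zeros(10)])
mic2 = np.concatenate([np.zeros(10), chirp])   # arrives 10 samples later
d = delay_samples(mic1, mic2)
tau = d / fs
# bearing from the far-field plane-wave model
angle = np.degrees(np.arcsin(c_air * tau / spacing))
```

The broadband sweep is what makes the correlation peak sharp; a narrowband tone would give an ambiguous, oscillating correlation, which is one reason bat-like signals suit this task.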
This paper presents the results of a theoretical and experimental analysis of selected properties of the function of conditional average value of the absolute value of a delayed signal (CAAV). The results obtained with the CAAV method have been compared with those obtained with the cross-correlation function (CCF) method, which is often used in measurements of random signal time delay. The paper is divided into five sections. The first gives a short introduction to the subject of the paper. The model of the measured stochastic signals is described in Section 2. The fundamentals of time delay estimation using the CCF and CAAV are presented in Section 3, where the standard deviations of both functions at their extreme points are evaluated and compared. The results of experimental investigations are discussed in Section 4. Computer simulations were used to evaluate the performance of the CAAV and CCF methods. The signal and the noise were Gaussian random variables produced by a pseudorandom noise generator. The experimental standard deviations of both functions for the chosen signal-to-noise ratio (SNR) were obtained and compared, with all simulation results averaged over 1000 independent runs. It should be noted that the experimental results were close to the theoretical values. The conclusions and final remarks are included in Section 5. The authors conclude that the CAAV method described in this paper has a smaller standard deviation at the extreme point than the CCF, and can be applied to time delay measurements of random signals.
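The two estimators can be compared on synthetic Gaussian data as follows. One common variant of the CAAV conditions on the instants at which the reference signal exceeds a threshold; the exact formulation and parameters used in the paper may differ, so this is an illustrative sketch only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_delay, max_lag = 50_000, 40, 80
x = rng.standard_normal(n + max_lag)                   # reference signal
noise = rng.standard_normal(n + max_lag)
# delayed copy of x plus additive Gaussian noise (SNR chosen arbitrarily)
y = np.concatenate([np.zeros(true_delay), x[:-true_delay]]) + 0.5 * noise

def ccf(x, y, max_lag):
    """Cross-correlation function estimate over lags 0..max_lag-1."""
    m = len(x) - max_lag
    return np.array([np.mean(x[:m] * y[t : t + m]) for t in range(max_lag)])

def caav(x, y, max_lag, level=1.0):
    """Conditional average of |y| taken tau samples after the instants
    at which the reference x exceeds the threshold 'level'."""
    idx = np.flatnonzero(x[: len(x) - max_lag] > level)
    return np.array([np.mean(np.abs(y[idx + t])) for t in range(max_lag)])

r = ccf(x, y, max_lag)
v = caav(x, y, max_lag)
# both functions reach their extremum at the true delay
```

Repeating this over many independent runs and comparing the spread of the extremum values would reproduce the standard-deviation comparison reported in Section 4.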
Autocorrelation of signals and measurement data makes it difficult to estimate their statistical characteristics. Moreover, the usefulness of the autocorrelation function for the statistical description of signal interdependence is limited to linear processing models. The use of the conditional expected value opens new possibilities for describing the interdependence of stochastic signals in both linear and non-linear models. It is characterized by relatively simple mathematical models and correspondingly simple algorithms for their practical implementation.
The paper presents a practical model of the exponential autocorrelation of measurement data and a theoretical analysis of its impact on the process of conditional averaging of the data. Optimization conditions for the process were determined in order to decrease the variance of the estimate of the conditional expected value. The theoretical relations obtained were compared with selected experimental results.
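The exponential autocorrelation model can be emulated with a first-order autoregressive process, for which the normalized autocorrelation is exactly ρ(k) = a^|k|. The sketch below generates such data and checks the sample autocorrelation against the model; the parameter values are arbitrary, not taken from the paper.

```python
import numpy as np

a, n = 0.9, 200_000          # rho(k) = a**|k|: exponential autocorrelation
rng = np.random.default_rng(3)
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for k in range(1, n):        # unit-variance AR(1) process
    x[k] = a * x[k - 1] + np.sqrt(1 - a * a) * e[k]

def sample_acf(x, max_lag):
    """Normalized sample autocorrelation for lags 0..max_lag."""
    x = x - x.mean()
    v = x @ x / len(x)
    return np.array([x[: len(x) - k] @ x[k:] / len(x) / v
                     for k in range(max_lag + 1)])

rho = sample_acf(x, 10)      # should track a**k
```

Samples spaced by several correlation times (here roughly 1/ln(1/a) ≈ 10 samples) behave as nearly independent, which is the kind of property the optimization of the conditional-averaging process exploits.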
In this paper, we continue the topic of modeling measurement processes by viewing them as a kind of signal sampling. An ideal model was developed in a previous work; here, we present its nonideal version. The extended model takes into account the averaging of a measured signal, and we show that this effect is similar to the smearing of signal samples that arises in nonideal signal sampling. Under the given conditions, signal averaging and signal smearing are essentially the same phenomenon, and can therefore be modeled in the same way. A thorough analysis of errors related to signal averaging in a measurement process is given and illustrated with equivalent schemes of the derived relationships. Furthermore, the results obtained are compared with the corresponding ones achieved in the analysis of amplitude quantization effects of sampled signals in digital techniques. We also show that modeling the errors related to signal averaging through the so-called quantization noise, assumed to be a uniformly distributed random signal, is a rather poor choice. An upper bound for this error is derived, and conditions for the occurrence of hidden aliasing effects in a measured signal are given.
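The averaging effect, and a first-order upper bound on its error, can be illustrated numerically. The bound used below, max|x'| · T_a / 2 for an aperture average over a window of length T_a, is the standard Lipschitz estimate and is offered as an assumption; it need not coincide with the bound derived in the paper. The signal and aperture parameters are invented.

```python
import numpy as np

f0, fs, t_avg = 50.0, 100_000, 1e-3   # assumed: 50 Hz sine, 1 ms aperture
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)

m = int(t_avg * fs)
# aperture-averaged "samples": mean of the signal over the window
# of length t_avg ending at each sampling instant
avg = np.convolve(x, np.ones(m) / m, mode="valid")
ideal = x[m - 1:]                     # ideal point samples at the same instants
err = np.max(np.abs(avg - ideal))
# first-order bound: |error| <= max|x'(t)| * t_avg / 2
bound = (2 * np.pi * f0) * t_avg / 2
```

For this smooth signal the measured error sits just below the bound, showing the bound is tight; note that it is deterministic, unlike the uniformly distributed quantization-noise model criticized above.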
There is a consensus in signal processing that the Gaussian kernel and its partial derivatives enable the development of robust algorithms for feature detection, with Fourier analysis and convolution theory playing a central role in such development. In this paper, we collect theoretical elements to follow this avenue, but using the q-Gaussian kernel, a nonextensive generalization of the Gaussian one. Firstly, we review the one-dimensional q-Gaussian and its Fourier transform. Then, we consider the two-dimensional q-Gaussian and highlight the issues behind the analytical computation of its Fourier transform. In the computational experiments, we analyze the q-Gaussian kernel in the space and Fourier domains using the concepts of space window, cut-off frequency, and the Heisenberg inequality.
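For reference, the one-dimensional q-Gaussian is usually written through the q-exponential, e_q(u) = [1 + (1 − q)u]_+^{1/(1−q)}, which reduces to the ordinary exponential as q → 1. A small numpy sketch (the width parameter β and test grid are arbitrary):

```python
import numpy as np

def q_exponential(u, q):
    """e_q(u) = [1 + (1 - q) u]_+ ** (1 / (1 - q)); ordinary exp as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = np.maximum(1.0 + (1.0 - q) * u, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian kernel e_q(-beta * x**2)."""
    return q_exponential(-beta * np.asarray(x) ** 2, q)

xs = np.linspace(-2.0, 2.0, 9)
near_gaussian = q_gaussian(xs, 1.000001)   # approaches exp(-x**2)
heavy_tailed = q_gaussian(xs, 2.0)         # Cauchy-like: 1 / (1 + x**2)
```

For q > 1 the kernel has power-law tails (e.g. q = 2 gives the Cauchy-like 1/(1 + βx²)), and for q < 1 it has compact support; both regimes change the effective space window and cut-off frequency relative to the Gaussian.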
Horns, teeth, claws, beaks… Given this mighty arsenal it’s a wonder there isn’t more physical conflict in the animal world, such as among birds.
This paper describes the theoretical background of modelling the electromagnetic induction response of metal objects. The response function for a specific object shape, a homogeneous sphere of ferromagnetic or non-ferromagnetic material, is introduced. Experimental data measured with a metal detector excited by a linearly frequency-swept signal are presented. Spheres of various materials and sizes were used as test targets. These results should lead to better identification of buried objects.
Determining the phase difference between two sinusoidal signals with noise components from samples of these signals is of interest in many measurement systems. The signal samples are processed by one of many algorithms, such as 7PSF, UQDE and MSAL, to determine the phase difference. The phase difference result must be accompanied by an estimate of the measurement uncertainty. The following issues are covered in this paper: the background of the MSAL algorithm, ways of treating the influence of bias on the phase difference result, a comparison of results obtained by applying MSAL and the other mentioned algorithms to the same real signal samples, and evaluation of the uncertainty of the phase difference.
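None of 7PSF, UQDE or MSAL is reproduced here. As a generic baseline for the task they address, the phase difference can be estimated with a three-parameter least-squares sine fit at the known frequency, applied to each channel; all signal parameters below are invented.

```python
import numpy as np

def fit_phase(x, t, omega):
    """Three-parameter LS fit x ~ a*cos(wt) + b*sin(wt) + c;
    returns phi of the equivalent A*cos(wt + phi)."""
    m = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(m, x, rcond=None)[0]
    return np.arctan2(-b, a)   # a = A cos(phi), b = -A sin(phi)

rng = np.random.default_rng(4)
fs, f0, n = 10_000, 50.0, 2_000
t = np.arange(n) / fs
w = 2 * np.pi * f0
# two noisy sinusoids with a 0.8 rad phase difference
u = np.cos(w * t + 0.3) + 0.01 * rng.standard_normal(n)
v = 0.8 * np.cos(w * t + 1.1) + 0.01 * rng.standard_normal(n)
dphi = fit_phase(v, t, w) - fit_phase(u, t, w)   # ~0.8 rad
```

The uncertainty of such an estimate can be evaluated by propagating the covariance of the least-squares parameters, which is the kind of analysis the paper carries out for the listed algorithms.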
Time-frequency (t-f) distributions are frequently employed for the analysis of newborn EEG signals because of their non-stationary characteristics. Most existing time-frequency distributions fail to concentrate energy for a multicomponent signal whose energy is distributed along multiple directions in the t-f domain. In order to analyse such signals, we propose an Adaptive Directional Time-Frequency Distribution (ADTFD). The ADTFD outperforms other adaptive-kernel and fixed-kernel TFDs in terms of its ability to achieve high resolution for EEG seizure signals. It is also shown that the ADTFD can be used to define new time-frequency features that lead to better classification of EEG signals; e.g. the use of the ADTFD yields 97.5% total accuracy, which is 2% more than the other methods achieve.
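The ADTFD itself involves an adaptive directional smoothing kernel and is not reproduced here. As a point of reference, the sketch below builds the kind of fixed-kernel baseline (a plain spectrogram) whose resolution the ADTFD is designed to surpass, applied to two crossing chirps, the multicomponent case discussed above; the sampling rate and component laws are invented.

```python
import numpy as np

def spectrogram(x, win_len, hop):
    """Magnitude-squared STFT: a basic fixed-kernel t-f distribution."""
    w = np.hanning(win_len)
    frames = np.stack([x[i:i + win_len] * w
                       for i in range(0, len(x) - win_len + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

fs = 256.0                      # assumed EEG-like sampling rate
t = np.arange(0, 8, 1 / fs)
# two crossing linear-FM components: energy along two directions in t-f
x = (np.sin(2 * np.pi * (2 * t + 0.5 * t**2))
     + np.sin(2 * np.pi * (10 * t - 0.5 * t**2)))
S = spectrogram(x, 128, 16)     # time frames x frequency bins
```

Where the two components cross, a fixed window blurs them together; an adaptive directional kernel instead smooths along each component's local orientation, which is what yields the resolution gain claimed above.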