Journal bearings, in which a shaft rotates freely inside a metallic sleeve, are among the most common bearing types. They are widely used in industry, especially where extremely high loads are involved. Proper analysis of the various bearing faults and early prediction of the modes of failure are essential to extend the working life of the bearing. In the current study, vibration data of a journal bearing are collected in the healthy condition and in five different fault conditions. A feature extraction method is employed to distinguish the different fault conditions, and automatic fault classification is performed using artificial neural networks (ANN). Since the probability of a correct prediction by an ANN decreases as the number of fault classes grows, the method is made more robust by incorporating deep neural networks (DNN) built with autoencoders. Training was performed with the scaled conjugate gradient algorithm, and performance was evaluated with the cross-entropy criterion. Owing to the additional hidden layers of the DNN, a classification accuracy of 100% is achieved with the feature extraction method.
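The abstract does not specify which features are extracted from the vibration records, so the following is only an illustrative sketch of a typical time-domain feature extraction step for bearing diagnostics (RMS, peak, kurtosis, crest factor are common choices; the paper's actual feature set may differ). The resulting feature vectors would then be fed to the ANN/DNN classifier.

```python
import numpy as np

def extract_features(signal):
    """Illustrative time-domain statistical features for one vibration
    record: [RMS, peak, kurtosis, crest factor]. These feature names are
    assumptions, not the paper's confirmed feature set."""
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    mean = np.mean(signal)
    std = np.std(signal)
    kurtosis = np.mean((signal - mean) ** 4) / (std ** 4 + 1e-12)
    crest = peak / (rms + 1e-12)
    return np.array([rms, peak, kurtosis, crest])

# Example: a clean sinusoid vs. the same signal with periodic impacts,
# as produced by a localised defect. The impulsive record should show
# markedly higher kurtosis and crest factor.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy.copy()
faulty[::200] += 5.0  # synthetic periodic impacts
print(extract_features(healthy))
print(extract_features(faulty))
```

Kurtosis-type features are popular here precisely because impulsive fault signatures inflate them while leaving the RMS of the signal almost unchanged.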
Laughter is one of the most important paralinguistic events, and it has specific roles in human conversation. The automatic detection of laughter occurrences in human speech can aid automatic speech recognition systems as well as paralinguistic tasks such as emotion detection. In this study we apply Deep Neural Networks (DNNs) for laughter detection, as this technology is now considered state-of-the-art in similar tasks such as phoneme identification. We carry out our experiments on two corpora of spontaneous speech in two languages (Hungarian and English). Also, since we expect that not all frequency regions are required for efficient laughter detection, we perform feature selection to find a sufficient feature subset.
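The abstract does not state which feature selection method is used, so the snippet below is only a minimal sketch of one common filter-style approach for a frame-level binary task (laughter vs. non-laughter): ranking per-band features by their Fisher discriminant ratio and keeping the top-scoring subset. The data, band count, and scoring rule are all assumptions for illustration.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher discriminant ratio for a binary frame-level
    task. Higher score means the feature separates the classes better."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

# Synthetic stand-in: 400 frames x 40 "frequency-band" features,
# where only the first 8 bands carry class information.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 40))
y = rng.integers(0, 2, size=400)
X[y == 1, :8] += 2.0

scores = fisher_scores(X, y)
selected = np.argsort(scores)[::-1][:8]  # keep the 8 most discriminative bands
print(sorted(int(i) for i in selected))
```

A reduced band subset chosen this way would then be fed to the DNN, consistent with the idea that not all frequency regions contribute to laughter detection.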
Speech enhancement is fundamental for various real-time speech applications, and it is a challenging task in the single-channel case because only one data channel is available in practice. In this paper we propose a supervised single-channel speech enhancement algorithm based on a deep neural network (DNN) with less aggressive Wiener filtering as an additional DNN layer. During the training stage, the network learns to predict the magnitude spectra of the clean and noise signals from the acoustic features of the input noisy speech. Relative spectral transform-perceptual linear prediction (RASTA-PLP) is used in the proposed method to extract the acoustic features at the frame level, and an autoregressive moving average (ARMA) filter is applied to smooth the temporal trajectories of the extracted features. The trained network predicts the coefficients used to construct a ratio mask, optimised with a mean square error (MSE) cost function. The less aggressive Wiener filter is placed as an additional layer on top of the DNN to produce an enhanced magnitude spectrum, and finally the noisy speech phase is used to reconstruct the enhanced speech. The experimental results demonstrate that the proposed DNN framework with less aggressive Wiener filtering outperforms the competing speech enhancement methods in terms of speech quality and intelligibility.
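The abstract does not give the exact form of the "less aggressive" Wiener layer, so the following is only a plausible sketch: a Wiener-type gain computed from predicted speech and noise magnitude spectra, softened by a factor `beta < 1` (the parameterisation and the toy inputs are assumptions, not the paper's confirmed formulation). The gain is applied to the noisy magnitude, and the noisy phase is reused for reconstruction, as the abstract describes.

```python
import numpy as np

def wiener_gain(speech_mag, noise_mag, beta=0.5):
    """Wiener-type gain G = |S|^2 / (|S|^2 + beta * |N|^2) per frequency
    bin. beta < 1 makes the attenuation less aggressive than the
    classical Wiener filter (beta = 1). Illustrative only."""
    ps = speech_mag ** 2
    pn = noise_mag ** 2
    return ps / (ps + beta * pn + 1e-12)

# Toy single frame: stand-ins for DNN-predicted magnitude spectra.
speech_hat = np.array([1.0, 0.5, 0.1])
noise_hat = np.array([0.1, 0.5, 1.0])
noisy_mag = speech_hat + noise_hat            # crude additive mixture

gain = wiener_gain(speech_hat, noise_hat)
enhanced_mag = gain * noisy_mag
noisy_phase = np.array([0.0, np.pi / 4, np.pi / 2])
enhanced_frame = enhanced_mag * np.exp(1j * noisy_phase)  # reuse noisy phase
print(gain)
```

Note how the gain stays close to 1 in speech-dominated bins and falls towards 0 in noise-dominated ones, which is the behaviour the post-filter layer is meant to provide on top of the learned ratio mask.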