Search results

Number of results: 4

Abstract

The speech signal can be described by three key elements: the excitation signal, the impulse response of the vocal tract, and a system representing the radiation of speech at the lips. The semantic content of speech is carried primarily by the characteristics of the vocal tract. Nonetheless, the irregular periodicity of the glottal excitation strongly affects the parameterisation coefficients, leading to notable variations in the feature vector values and to ripples in the amplitude spectrum. In this study, a method is proposed to mitigate this phenomenon. To this end, inverse filtering was used to estimate the excitation and the transfer function of the vocal tract. Subsequently, using the derived parameterisation coefficients, statistical models of individual Polish phonemes were built as mixtures of Gaussian distributions. The impact of these corrections on the classification accuracy of Polish vowels was then investigated. The proposed modification of the parameterisation method fulfils expectations: the scatter of feature vector values was reduced.
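The final step above, scoring feature vectors against per-phoneme Gaussian mixture models, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-dimensional features, component counts, and all model parameters below are hypothetical toy values.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    # log p(x) under a diagonal-covariance Gaussian mixture
    # x: (D,), weights: (K,), means/variances: (K, D)
    log_comp = (
        np.log(weights)
        - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
        - 0.5 * np.sum((x - means) ** 2 / variances, axis=1)
    )
    m = log_comp.max()  # log-sum-exp for numerical stability
    return m + np.log(np.exp(log_comp - m).sum())

# toy per-phoneme models: (weights, means, variances) -- hypothetical numbers
models = {
    "a": (np.array([0.5, 0.5]), np.array([[0.0, 0.0], [1.0, 0.0]]), np.ones((2, 2))),
    "e": (np.array([0.5, 0.5]), np.array([[4.0, 4.0], [5.0, 4.0]]), np.ones((2, 2))),
}

def classify(x):
    # assign the feature vector to the phoneme with the highest likelihood
    return max(models, key=lambda p: gmm_loglik(x, *models[p]))
```

In practice the mixtures would be trained on the corrected parameterisation coefficients, but the classification rule, maximum likelihood over per-phoneme GMMs, is the same.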

Authors and Affiliations

Stanislaw Gmyrek
1
Robert Hossa
1
Ryszard Makowski
1

  1. Department of Acoustics, Multimedia and Signal Processing, Wroclaw University of Science and Technology, Wroclaw, Poland

Abstract

The paper presents an analysis of modern artificial intelligence algorithms for an automated system that supports people during conversations in Polish. Its task is to perform Automatic Speech Recognition (ASR) and process the result further, for instance to fill in a computer-based form or to apply Natural Language Processing (NLP) to assign the conversation to one of several predefined categories. A state-of-the-art review is required to select the optimal set of tools for processing speech in difficult conditions that degrade ASR accuracy. The paper presents the top-level architecture of a system applicable to this task. Characteristics of the Polish language are discussed. Next, existing ASR solutions and architectures with End-To-End (E2E) deep neural network (DNN) based ASR models are presented in detail. Differences between Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs) and Transformers in the context of ASR technology are also discussed.

Authors and Affiliations

Karolina Pondel-Sycz
1
Piotr Bilski
1

  1. Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19 Av., 00-665 Warsaw, Poland

Abstract

This article concerns research on deep neural network (DNN) models used for automatic speech recognition (ASR). In such systems, recognition is based on Mel Frequency Cepstral Coefficient (MFCC) acoustic features and spectrograms. The latest ASR technologies are based on convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Transformers. The article presents an analysis of modern artificial intelligence algorithms adapted for automatic recognition of the Polish language. The differences between conventional architectures and End-To-End (E2E) DNN ASR models are discussed. Preliminary tests of five selected models (QuartzNet, FastConformer, Wav2Vec 2.0 XLSR, Whisper and ESPnet Model Zoo) on the Mozilla Common Voice, Multilingual LibriSpeech and VoxPopuli databases are presented. Tests were conducted for a clean audio signal, a band-limited signal, and a degraded signal. The tested models were evaluated using the Word Error Rate (WER).
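The Word Error Rate used for the evaluation above is the word-level Levenshtein distance between reference and hypothesis transcripts, divided by the reference length. A minimal sketch (the example sentences are made up, not from the tested corpora):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word in a three-word reference yields a WER of 1/3.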

Authors and Affiliations

Karolina Pondel-Sycz
1
Agnieszka Paula Pietrzak
1
Julia Szymla
1

  1. Faculty of Electronics and Information Technology, Warsaw University of Technology, Warsaw, Poland

Abstract

The same speech sounds (phones) produced by different speakers can sometimes exhibit significant differences, so ASR systems must use algorithms that compensate for these differences. Speaker clustering is an attractive solution to the compensation problem, as it requires neither long utterances nor high computational effort at the recognition stage. The report proposes a clustering method based solely on the adaptation of Universal Background Model (UBM) weights. This solution has turned out to be effective even for very short utterances. The obtained improvement in frame recognition quality, measured by the frame error rate, is over 5%. It is noteworthy that this improvement concerns all vowels, even though the clustering discussed in this report was based only on the phoneme a. This indicates a strong correlation between the articulation of different vowels, which is probably related to the size of the vocal tract.
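Weights-only UBM adaptation of the kind described above can be sketched with MAP-style interpolation: soft frame counts are collected from component responsibilities and blended with the prior weights, while means and variances stay fixed. This is an illustrative sketch, not the report's algorithm; the diagonal-covariance model, the `relevance` factor, and all numbers are assumptions.

```python
import numpy as np

def adapt_ubm_weights(features, weights, means, variances, relevance=16.0):
    # features: (N, D) frames; weights: (K,); means/variances: (K, D), diagonal covariances
    log_comp = (
        np.log(weights)[None, :]
        - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)[None, :]
        - 0.5 * (((features[:, None, :] - means[None]) ** 2) / variances[None]).sum(-1)
    )
    # per-frame responsibilities of each UBM component (softmax over components)
    resp = np.exp(log_comp - log_comp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    counts = resp.sum(axis=0)  # soft frame counts per component
    n = features.shape[0]
    # interpolate the observed counts toward the prior UBM weights
    new_w = (counts + relevance * weights) / (n + relevance)
    return new_w / new_w.sum()
```

Because only the weight vector is re-estimated, a handful of frames from a very short utterance is enough to shift probability mass toward the components the speaker actually occupies.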

Authors and Affiliations

Robert Hossa
Ryszard Makowski
