Search results

Number of results: 2

Abstract

The aim of this paper is to compare the efficiency of various outlier correction methods for ECG signal processing in biometric applications. The main idea is to correct anomalies in particular segments of the ECG waveform rather than to skip a corrupted heartbeat, so that more heartbeats remain available and the statistics improve. Experiments were performed on the self-collected Lviv Biometric Dataset, which contains over 1400 records from 95 unique persons. The baseline identification accuracy without any correction is around 86%. After applying outlier correction, accuracy improved to up to 98% for the autoencoder-based algorithms and up to 97.1% for the sliding Euclidean window. Adding an outlier correction stage to the biometric identification process increases processing time by up to 20%; however, this is not critical in most use cases.
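The sliding-Euclidean-window idea might be sketched as follows. This is a minimal illustration, not the authors' implementation: the window size, the median-based threshold, and replacing an outlier beat with the local mean are all assumptions, since the abstract does not specify these details.

```python
import math

def correct_outlier_beats(beats, window=5, threshold=3.0):
    """Flag and correct anomalous heartbeats with a sliding Euclidean window.

    Each beat (a list of samples) is compared against the point-wise mean of
    its neighbouring beats; a beat whose Euclidean distance to that mean
    exceeds `threshold` times the median distance is replaced by the local
    mean instead of being discarded.
    """
    n = len(beats)
    dists, means = [], []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbours = [beats[j] for j in range(lo, hi) if j != i]
        # Point-wise mean of the neighbouring beats.
        mean = [sum(col) / len(neighbours) for col in zip(*neighbours)]
        means.append(mean)
        dists.append(math.dist(beats[i], mean))
    med = sorted(dists)[n // 2]
    corrected = []
    for i, beat in enumerate(beats):
        if med > 0 and dists[i] > threshold * med:
            corrected.append(means[i])  # replace the outlier with the local mean
        else:
            corrected.append(list(beat))
    return corrected
```

The point of correcting rather than skipping is that the downstream classifier still receives one feature vector per heartbeat, so fewer enrolment records are lost.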


Authors and Affiliations

Su Jun
Miroslaw Szmajda
Volodymyr Khoma
Yuriy Khoma
Dmytro Sabodashko
Orest Kochan
Jinfei Wang

Abstract

The effective utilisation of coal mine monitoring data is at the core of realising an intelligent mine. The complex and challenging underground environment, coupled with unstable sensors, can introduce “dirty” data into the monitoring information. A reliable data cleaning method is therefore needed to extract high-quality information from large monitoring data sets while minimising data redundancy. To this end, a cleaning method for sensor monitoring data based on stacked denoising autoencoders (SDAE) is proposed. Sample data of the ventilation system under normal conditions are used to train the SDAE, and the upper limit of the reconstruction error is obtained by kernel density estimation (KDE). The Apriori algorithm is used to study correlations between monitoring data time series. By comparing the reconstruction errors and error durations of test data against the upper limit of the reconstruction error and the tolerance time, combined with the correlation rules, the “dirty” data are identified. The method is tested in the Dongshan coal mine. The experimental results show that the proposed method can not only identify dirty data but also retain fault information. The research provides effective basic data for fault diagnosis and disaster warning.
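The KDE step described above can be sketched as follows: fit a Gaussian kernel density estimate to the reconstruction errors observed under normal conditions, then take a high quantile of that estimate as the upper limit. This is a sketch under stated assumptions, not the paper's implementation; the 99% coverage level, Silverman's rule-of-thumb bandwidth, and the bisection search are all illustrative choices.

```python
import math
import statistics

def kde_upper_limit(errors, coverage=0.99):
    """Estimate an upper control limit for SDAE reconstruction errors.

    Fits a Gaussian KDE to errors collected under normal ventilation
    conditions and returns the value below which `coverage` of the
    estimated probability mass lies. Test readings whose reconstruction
    error stays above this limit longer than the tolerance time would be
    treated as "dirty" data.
    """
    n = len(errors)
    sd = statistics.stdev(errors)
    h = 1.06 * sd * n ** (-0.2)  # Silverman's rule-of-thumb bandwidth

    def cdf(x):
        # CDF of the KDE: average of Gaussian kernel CDFs centred at samples.
        return sum(0.5 * (1.0 + math.erf((x - e) / (h * math.sqrt(2.0))))
                   for e in errors) / n

    # Bisection search for the `coverage` quantile of the smoothed density.
    lo, hi = min(errors) - 5.0 * h, max(errors) + 5.0 * h
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if cdf(mid) < coverage:
            lo = mid
        else:
            hi = mid
    return hi
```

Using a KDE quantile rather than a fixed multiple of the standard deviation avoids assuming that the normal-condition errors are Gaussian, which is rarely the case for reconstruction errors.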

Authors and Affiliations

Dan Zhao 1
Zhiyuan Shen 1
Zihao Song 1
Lina Xie 2

  1. Liaoning Technical University, College of Safety Science & Engineering, Fuxin 123000, China
  2. Shenyang Institute of Technology, Shenyang 110000, China
