Search results

Filters

  • Journals
  • Authors
  • Keywords
  • Date
  • Type

Search results

Number of results: 9
items per page: 25 50 75
Sort by:
Download PDF Download RIS Download Bibtex

Abstract

To achieve higher classification accuracy for power quality disturbance signals, this paper proposes a power quality disturbance classification method based on the Gramian angular field (GAF) and multiple transfer learning. First, the one-dimensional power quality disturbance signal is transformed into a GAF-coded image, and three ResNet networks are constructed as sub-models. Disturbance signals with representative signal-to-noise ratios of 0 dB, 20 dB and 40 dB are selected as inputs to train the three sub-models, respectively. During training, the weights of the sub-models are transferred in turn using multiple transfer learning: each sub-model inherits its pre-training weights from the previously trained one, and partial freezing and partial fine-tuning of the weights are applied to ensure the best training result. Finally, the features of the three sub-models are fused to train a fully connected classifier, yielding the power quality disturbance classification model. Simulation results show that the method achieves higher classification accuracy and better noise immunity, and that the proposed model has good robustness and generalization.
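As a minimal illustration of the GAF encoding step described above, the sketch below converts a one-dimensional signal into a Gramian angular summation field image; the signal length and the use of NumPy are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of Gramian angular summation field (GASF) encoding of a
# 1-D disturbance signal (illustrative, not the paper's exact pipeline).
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D signal as a GASF image."""
    # Rescale the signal to [-1, 1] so that arccos is defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    # Polar encoding: each sample becomes an angle.
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # Summation field: G[i, j] = cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

# Example: encode one synthetic 64-sample disturbance waveform.
signal = np.sin(np.linspace(0, 4 * np.pi, 64))
gaf_image = gramian_angular_field(signal)   # shape (64, 64), values in [-1, 1]
```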

Authors and Affiliations

Peng Heping (1)
Mo Wenxiong (1)
Wang Yong (1)
Luan Le (1)
Xu Zhong (1)

  1. Guangzhou Power Supply Bureau of Guangdong Power Grid Co., Ltd., Guangzhou, Guangdong 510620, China

Abstract

Acquiring labels for anomaly detection tasks is expensive and challenging. Pretraining is therefore widely used in anomaly detection models as an effective way to improve efficiency: it enriches the model's representations and thereby improves both detection performance and efficiency. In most pretraining methods, the decoder is randomly initialized. Drawing inspiration from diffusion models, this paper proposes denoising as a pretraining task for the decoder in anomaly detection: the decoder is trained to reconstruct the original noise-free input. Denoising forces the model to learn the structure, patterns and related features of the data, which is particularly valuable when training samples are limited. Two approaches are explored: denoising pretraining of both the encoder and the decoder, and denoising pretraining of the decoder only. Experimental results demonstrate that the method improves model performance, and the improvement is more pronounced when the number of samples is limited.
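The denoising pretraining idea can be sketched as below; the network shapes, noise level, and optimizer settings are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of denoising pretraining for an autoencoder used in anomaly
# detection: reconstruct the clean input from a noisy copy. The paper also
# considers pretraining only the decoder (e.g. with the encoder frozen).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
criterion = nn.MSELoss()

def denoising_pretrain(batch: torch.Tensor, noise_std: float = 0.1) -> float:
    """One pretraining step: reconstruct the clean input from a noisy copy."""
    noisy = batch + noise_std * torch.randn_like(batch)
    recon = decoder(encoder(noisy))
    loss = criterion(recon, batch)          # target is the noise-free input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example step on a random batch of 32 feature vectors.
loss = denoising_pretrain(torch.randn(32, 128))
```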

Authors and Affiliations

Xianlei Ge (1, 2)
Xiaoyan Li (3)
Zhipeng Zhang (1)

  1. School of Electronic Engineering, Huainan Normal University, China
  2. College of Computing and Information Technologies, National University, Philippines
  3. School of Computer, Huainan Normal University, China

Abstract

Specific emitter identification (SEI) is the process of identifying individual emitters by analyzing their radio frequency emissions, based on the fact that each device contains unique hardware imperfections. While the majority of previous research focuses on obtaining discriminative features, the reliability of the features is rarely considered. For example, since the device characteristics of an emitter vary when it operates at different carrier frequencies, the performance of SEI approaches may degrade when the training data and the test data are collected from the same emitters at different frequencies. To improve the performance of SEI under varying carrier frequency, we propose an approach based on the continuous wavelet transform (CWT) and a domain adversarial neural network (DANN). The proposed approach exploits unlabeled test data in addition to labeled training data, in order to learn representations that are discriminative for individual emitters and invariant to varying frequencies. Experiments are conducted on signals received from five emitters at three carrier frequencies. The results demonstrate the superior performance of the proposed approach when the carrier frequencies of the training data and the test data differ.
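The core of DANN-based adaptation is a gradient reversal layer between the feature extractor and a domain (carrier-frequency) classifier. A minimal PyTorch sketch with illustrative layer sizes follows; it is not the authors' exact network.

```python
# Gradient reversal layer and two-headed forward pass of a DANN-style model.
# Layer sizes, class counts, and the lambda weighting are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
emitter_classifier = nn.Sequential(nn.Linear(128, 5))   # 5 emitters
domain_classifier = nn.Sequential(nn.Linear(128, 3))    # 3 carrier frequencies

def forward_pass(x: torch.Tensor, lambd: float = 1.0):
    features = feature_extractor(x)
    emitter_logits = emitter_classifier(features)
    # The domain branch sees reversed gradients, pushing the features to be
    # indistinguishable across carrier frequencies.
    domain_logits = domain_classifier(GradReverse.apply(features, lambd))
    return emitter_logits, domain_logits

emitter_logits, domain_logits = forward_pass(torch.randn(8, 256))
```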
Go to article


Authors and Affiliations

Keju Huang (1)
Junan Yang (1)
Hui Liu (1)
Pengjiang Hu (1)

  1. College of Electronic Engineering, National University of Defense Technology, Hefei, Anhui 230037, China

Abstract

Large concrete structures such as buildings, bridges, and tunnels are aging. In Japan and many other countries, those built during economic reconstruction after World War II are about 60 to 70 years old, and flacking and other problems are becoming more noticeable. Periodic inspections were made mandatory by government and ministerial ordinance during the 2013-2014 fiscal year, and inspections based on the new standards have just begun. There are various methods to check the soundness of concrete, but the hammering test is widely used because it does not require special equipment. However, long experience is required to master the hammering test. Therefore, mechanization is desired. Although the difference between the sound of a defective part and a normal part is very small, we have shown that neural network is useful in our research. To use this technology in the actual field, it is necessary to meet the forms of concrete structures in various conditions. For example, flacking in concrete exists at various depths, and it is impossible to learn about flacking in all cases. This paper presents the results of a study of the possibility of finding flacking at different depths with a single inspection learning model and an idea to increase the accuracy of a learning model when we use a rolling hammer.
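As a rough sketch of the kind of pipeline the abstract describes, the snippet below scores one hammering impact from its magnitude spectrum with a small neural network; the window length, feature choice, and network size are assumptions, not values reported in the paper.

```python
# Illustrative sketch only: classify a hammering-test impact as "normal" or
# "flaking" from its magnitude spectrum with a small neural network.
import numpy as np
import torch
import torch.nn as nn

def spectrum_features(waveform: np.ndarray, n_fft: int = 1024) -> torch.Tensor:
    """Normalized magnitude spectrum of one hammering impact."""
    spectrum = np.abs(np.fft.rfft(waveform, n=n_fft))
    return torch.tensor(spectrum / (spectrum.max() + 1e-9), dtype=torch.float32)

classifier = nn.Sequential(
    nn.Linear(513, 64), nn.ReLU(),
    nn.Linear(64, 2),            # logits: normal vs. flaking
)

# Example: score one synthetic impact recording.
impact = np.random.randn(1024)
logits = classifier(spectrum_features(impact).unsqueeze(0))
```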

Authors and Affiliations

Atsushi Ito (1)
Masafumi Koike (2)
Katsuhiko Hibino (3)

  1. Faculty of Economics, Chuo University, Tokyo, Japan
  2. Department of Engineering, Utsunomiya University, Tochigi, Japan
  3. Port Denshi Corporation, Tokyo, Japan

Abstract

In recent years, deep learning, and especially deep neural networks (DNN), have achieved impressive performance on a variety of problems, in particular in classification and pattern recognition. Among the many kinds of DNNs, convolutional neural networks (CNN) are the most commonly used. However, due to their complexity, there are many problems related, but not limited, to optimizing network parameters, avoiding overfitting and ensuring good generalization. A number of methods have therefore been proposed to deal with these problems. In this paper, we present the results of applying different, recently developed methods for improving deep neural network training and operation. We focus on the most popular CNN structures, namely VGG-based neural networks: VGG16, VGG11 and our proposed VGG8. The tests were conducted on a real and very important problem: skin cancer detection. A publicly available dataset of skin lesions was used as a benchmark. We analyzed the influence of applying dropout, batch normalization, model ensembling, and transfer learning, and we also checked the influence of the type of activation function. To increase the objectivity of the results, each of the tested models was trained 6 times and the results were averaged. In addition, to mitigate the impact of the selection of training, test and validation sets, k-fold cross-validation was applied.
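A minimal sketch of a VGG-style block with batch normalization and a dropout classifier head, in the spirit of the reduced VGG8 variant, is given below; the layer counts, channel widths, and image size are illustrative assumptions, not the paper's exact architecture.

```python
# VGG-style convolutional block with batch normalization, plus a dropout
# classifier head, for a binary skin-lesion decision (illustrative only).
import torch
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    vgg_block(3, 32),
    vgg_block(32, 64),
    vgg_block(64, 128),
    nn.Flatten(),
    nn.Dropout(0.5),                 # one of the regularizers studied
    nn.Linear(128 * 28 * 28, 2),     # benign vs. malignant lesion
)

logits = model(torch.randn(1, 3, 224, 224))   # one 224x224 RGB lesion image
```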


Authors and Affiliations

M. Grochowski
A. Kwasigroch
A. Mikołajczyk

Abstract

The article presents research on animal detection in thermal images using the YOLOv5 architecture. The goal of the study was to obtain a model with high performance in detecting animals in this type of image, and to examine how changes in hyperparameters affect the learning curves and final results. Different values of the learning rate and momentum and different optimizer types were tested with respect to the model's learning performance. Two methods of tuning hyperparameters were used: grid search and evolutionary algorithms. The model was trained and tested on an in-house dataset containing images of deer and wild boars. After the experiments, the trained architecture achieved a mean average precision (mAP) of up to 83%. These results are promising and indicate that the YOLO model can be used for automatic animal detection in applications such as wildlife monitoring, environmental protection or security systems.
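The grid-search strategy mentioned above can be sketched as follows; the train_and_evaluate function is a hypothetical stand-in for a full YOLOv5 training run and is not part of the YOLOv5 API, and the candidate values are illustrative.

```python
# Illustrative grid search over learning rate and momentum, one of the two
# hyperparameter tuning strategies mentioned in the abstract.
from itertools import product

def train_and_evaluate(lr: float, momentum: float) -> float:
    """Stand-in: train the detector with these hyperparameters and return
    the validation mAP. Replace with an actual training run."""
    return 0.0

learning_rates = [0.001, 0.005, 0.01]
momenta = [0.80, 0.90, 0.937]

best = None
for lr, momentum in product(learning_rates, momenta):
    score = train_and_evaluate(lr, momentum)
    if best is None or score > best[0]:
        best = (score, lr, momentum)

print("best mAP %.3f with lr=%g, momentum=%g" % best)
```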

Authors and Affiliations

Łukasz Popek (1, 3)
Rafał Perz (2, 3)
Grzegorz Galiński (1)
Artur Abratański (2, 3)

  1. Warsaw University of Technology, Faculty of Electronics and Information Technology
  2. Warsaw University of Technology, Faculty of Power and Aeronautical Engineering
  3. Sieć badawcza Rafał Perz, Poland

Abstract

In the domain of affective computing, different emotional expressions play an important role. Facial expressions and other visual cues are the primary means of conveying a person's emotional state, and they do so more convincingly than any other cue. With the advancement of deep learning techniques, convolutional neural networks (CNN) can automatically extract features from visual cues; however, variable-sized and biased datasets remain a major challenge for deploying deep models, and the dataset used for training plays a significant role in the results obtained. In this paper, we propose a multi-model hybrid ensemble weighted adaptive approach with decision-level fusion for personalized affect recognition based on visual cues. We use a CNN and a pre-trained ResNet-50 model for transfer learning; the weights of the VGGFace model are used to initialize ResNet-50 for fine-tuning. The proposed system shows a significant improvement in test accuracy for affective state recognition compared to a single CNN model trained from scratch or a transfer-learned model. The proposed methodology is validated on the Karolinska Directed Emotional Faces (KDEF) dataset with 77.85% accuracy. The obtained results are promising compared to existing state-of-the-art methods.
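Decision-level fusion of the two models can be sketched as a weighted average of their class probabilities; the weights, class count, and tensor shapes below are illustrative assumptions rather than the paper's tuned values.

```python
# Minimal sketch of decision-level (late) fusion: per-model softmax outputs
# are combined with weights before taking the argmax.
import torch
import torch.nn.functional as F

def fuse_decisions(logits_cnn: torch.Tensor,
                   logits_resnet: torch.Tensor,
                   weights=(0.4, 0.6)) -> torch.Tensor:
    """Weighted average of class probabilities from two models."""
    probs = weights[0] * F.softmax(logits_cnn, dim=1) \
          + weights[1] * F.softmax(logits_resnet, dim=1)
    return probs.argmax(dim=1)

# Example with random logits for a batch of 4 images and 7 emotion classes.
pred = fuse_decisions(torch.randn(4, 7), torch.randn(4, 7))
```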


Authors and Affiliations

Nagesh Jadhav (1)
Rekha Sugandhi (1)

  1. MIT ADT University, Pune, Maharashtra, 412201, India

Abstract

Variation in powertrain parameters caused by dimensioning, manufacturing and assembly inaccuracies may prevent model-based virtual sensors from representing physical powertrains accurately. Data-driven virtual sensors employing machine learning models offer a solution for including variations in the powertrain parameters. These variations can be efficiently included in the training of the virtual sensor through simulation. The trained model can then be theoretically applied to real systems via transfer learning, allowing a data-driven virtual sensor to be trained without the notoriously labour-intensive step of gathering data from a real powertrain. This research presents a training procedure for a data-driven virtual sensor. The virtual sensor was made for a powertrain consisting of multiple shafts, couplings and gears. The training procedure generalizes the virtual sensor for a single powertrain with variations corresponding to the aforementioned inaccuracies. The training procedure includes parameter randomization and random excitation. That is, the data-driven virtual sensor was trained using data from multiple different powertrain instances, representing roughly the same powertrain. The virtual sensor trained using multiple instances of a simulated powertrain was accurate at estimating rotating speeds and torque of the loaded shaft of multiple simulated test powertrains. The estimates were computed from the rotating speeds and torque at the motor shaft of the powertrain. This research gives excellent grounds for further studies towards simulation-to-reality transfer learning, in which a virtual sensor is trained with simulated data and then applied to a real system.
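The parameter-randomization idea can be sketched as drawing perturbed powertrain parameters for each simulated training instance; the parameter names, ranges, and the simulator interface below are hypothetical placeholders, not the authors' simulation model.

```python
# Sketch of parameter randomization for generating simulated training data:
# each training powertrain instance gets slightly perturbed parameters and a
# random excitation signal.
import numpy as np

rng = np.random.default_rng(0)

def randomized_parameters() -> dict:
    """Draw one powertrain instance with perturbed physical parameters."""
    return {
        "shaft_stiffness": 1.0e4 * rng.uniform(0.9, 1.1),   # N*m/rad
        "coupling_damping": 2.0 * rng.uniform(0.8, 1.2),    # N*m*s/rad
        "gear_ratio": 3.0 * rng.uniform(0.99, 1.01),
    }

dataset = []
for _ in range(100):
    params = randomized_parameters()
    excitation = rng.standard_normal(1000)   # random torque excitation
    # A powertrain simulator (not shown) would map (params, excitation) to
    # motor-shaft measurements (model inputs) and load-shaft speed and torque
    # (regression targets) for training the virtual sensor.
    dataset.append((params, excitation))
```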

Authors and Affiliations

Aku Karhinen (1)
Aleksanteri Hamalainen (1)
Mikael Manngard (2)
Jesse Miettinen (1)
Raine Viitala (1)

  1. Department of Mechanical Engineering, Aalto University, 02150, Espoo, Finland
  2. Novia University of Applied Sciences, Juhana Herttuan puistokatu 21, 20100 Turku, Finland
