Search results

Filters

  • Journals
  • Authors
  • Keywords
  • Date
  • Type

Search results

Number of results: 4

Abstract

Workpiece surface roughness measurement based on traditional machine vision technology faces numerous problems, such as complex index design, poor robustness to the lighting environment, and slow detection speed, which make it unsuitable for industrial production. To address these problems, this paper proposes an improved YOLOv5 method for milling surface roughness detection. The method automatically extracts image features and offers greater robustness to lighting conditions and faster detection. By introducing Coordinate Attention (CA), we effectively improve the model's detection accuracy for workpieces located at different positions. The experimental results demonstrate that the improved model achieves accurate surface roughness detection for moving workpieces under light intensities ranging from 592 to 1060 lux. The average precision of the model on the test set reaches 97.3%, and the detection speed reaches 36 frames per second.
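
The abstract does not give implementation details for the attention block, but Coordinate Attention itself is a well-documented design. The PyTorch sketch below shows its general structure, of the kind that can be dropped into a YOLOv5 backbone; the class name, reduction factor, and activation choice are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal Coordinate Attention block (illustrative sketch, not the paper's code)."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Encode position by pooling along each spatial axis separately.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (n, c, 1, w)
        # Reweight the input with direction-aware attention maps.
        return x * a_h * a_w
```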

Authors and Affiliations

Xiao Lv 1, Huaian Yi 1, Runji Fang 1, Shuhua Ai 1, Enhui Lu 2

  1. School of Mechanical and Control Engineering, Guilin University of Technology, Guilin, 541006, People’s Republic of China
  2. School of Mechanical Engineering, Yangzhou University, Yangzhou, 225009, People’s Republic of China

Abstract

Convolutional neural networks have achieved tremendous success in image processing and computer vision. However, they struggle to jointly handle low-frequency information, such as semantic and category content and background color, and high-frequency information, such as edges and structure. We propose an efficient and accurate deep learning framework, the multi-frequency feature extraction and fusion network (MFFNet), for image processing tasks such as deblurring. MFFNet uses edge and attention modules to restore high-frequency information and overcomes the multiscale parameter problem and the low efficiency of recurrent architectures. It processes information along multiple paths and extracts features such as edges, colors, positions, and differences. Edge detectors and attention modules are then aggregated into units that refine and learn this knowledge, and the resulting multi-learning features are fused into a final perception result. Experimental results indicate that the proposed framework achieves state-of-the-art deblurring performance on benchmark datasets.
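
The abstract names edge and attention modules without specifying their design. As a rough illustration of the two ideas (a high-frequency edge branch and an attention-weighted fusion of two feature paths), here is a minimal PyTorch sketch; the module names, the Sobel-based edge extractor, and the channel-gate fusion are assumptions for illustration, not the MFFNet implementation.

```python
import torch
import torch.nn as nn

class EdgeBranch(nn.Module):
    """Fixed Sobel filters as a simple high-frequency (edge) feature extractor."""
    def __init__(self, channels):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One horizontal and one vertical filter per input channel (depthwise).
        kernel = torch.stack([gx, gy]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)
        self.channels = channels
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        edges = nn.functional.conv2d(x, self.kernel, padding=1, groups=self.channels)
        return self.fuse(edges)

class AttentionFusion(nn.Module):
    """Fuse a low-frequency path and a high-frequency path with a learned channel gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low_freq, high_freq):
        a = self.gate(low_freq + high_freq)
        # Channel-wise blend of the two paths.
        return a * high_freq + (1 - a) * low_freq
```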

Authors and Affiliations

Jinsheng Deng 1, Zhichao Zhang 2, Xiaoqing Yin 1

  1. College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha 410000, China
  2. College of Computer, National University of Defense Technology, Changsha 410000, China

Abstract

Speech emotion recognition (SER) is a complicated and challenging task in human-computer interaction because it is difficult to find a feature set that fully discriminates emotional states. The FFT is conventionally applied to the raw signal when extracting low-level descriptor features such as short-time energy, fundamental frequency, formants, and MFCCs (mel-frequency cepstral coefficients). However, these features are defined in the frequency domain and ignore information from the temporal domain. In this paper, we propose a novel framework that combines a multi-layer wavelet sequence set, obtained by wavelet packet reconstruction (WPR), with a conventional feature set to form a mixed feature set for emotion recognition using recurrent neural networks (RNN) with an attention mechanism. In addition, since silent frames are detrimental to SER, we adopt autocorrelation-based voice activity detection to eliminate emotionally irrelevant frames. We show that the proposed algorithm significantly outperforms the traditional feature set in predicting spontaneous emotional states on the IEMOCAP corpus and the EMODB database, and achieves better classification in both speaker-independent and speaker-dependent experiments. Notably, we obtain accuracies of 62.52% and 77.57% in the speaker-independent (SI) setting, and 66.90% and 82.26% in the speaker-dependent (SD) setting.
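
As an illustration of the wavelet-packet side of such a mixed feature set, the sketch below computes frame-wise wavelet-packet subband log-energies with PyWavelets. The frame length, wavelet family ('db4'), decomposition depth, and the function name wpr_features are illustrative assumptions rather than the authors' exact feature definition.

```python
import numpy as np
import pywt

def wpr_features(signal, sr=16000, frame_len=0.025, hop=0.010,
                 wavelet="db4", level=3):
    """Frame-wise wavelet-packet subband log-energies (2**level bands per frame).

    A minimal stand-in for a multi-layer wavelet feature set; all parameter
    values here are illustrative choices.
    """
    n_frame = int(frame_len * sr)
    n_hop = int(hop * sr)
    feats = []
    for start in range(0, len(signal) - n_frame + 1, n_hop):
        frame = signal[start:start + n_frame]
        wp = pywt.WaveletPacket(frame, wavelet=wavelet, maxlevel=level)
        # Order the leaf nodes by frequency so the feature layout is consistent.
        bands = [node.data for node in wp.get_level(level, order="freq")]
        feats.append([np.log(np.sum(b ** 2) + 1e-10) for b in bands])
    return np.asarray(feats)   # shape: (num_frames, 2**level)

# Example: one second of audio at 16 kHz yields a (num_frames, 8) feature matrix.
feats = wpr_features(np.random.randn(16000))
```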

Authors and Affiliations

Hao Meng 1, Tianhao Yan 1, Hongwei Wei 1, Xun Ji 2

  1. Key Laboratory of Intelligent Technology and Application of Marine Equipment (Harbin Engineering University), Ministry of Education, Harbin, 150001, China
  2. College of Marine Electrical Engineering, Dalian Maritime University, Dalian, 116026, China

Abstract

Specific emitter identification (SEI) distinguishes individual radio transmitters using subtle features of the received waveform and is therefore used extensively in both military and civilian fields. However, traditional identification methods require extensive prior knowledge, are time-consuming, and degrade in various ways when identifying communication emitter signals in complex environments. To address the weak robustness of hand-crafted feature methods, many researchers have applied deep learning to image-based identification in the field of emitter identification. However, classification methods based on real-valued neural networks cannot extract the In-phase/Quadrature (I/Q)-related information of electromagnetic signals. To address these shortcomings, this paper proposes a new deep-learning-based SEI framework. In the proposed framework, a complex-valued residual network structure first mines the relationship between the in-phase and quadrature components of the radio-frequency baseband signal. A one-dimensional convolution layer is then used to a) directly extract features from the one-dimensional time-domain signal sequence, b) identify the extracted features with an attention mechanism unit, and c) weight them according to their importance. Experiments show that the proposed framework, combining complex-valued residual networks with an attention mechanism, achieves high accuracy and superior performance in identifying communication emitter signals.
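
The complex-valued convolution underlying such a residual network is standard (deep complex networks, Trabelsi et al.): applying a complex filter Wr + jWi to an I/Q signal I + jQ gives real part I*Wr - Q*Wi and imaginary part I*Wi + Q*Wr. The PyTorch sketch below shows a minimal complex 1-D convolution and residual block on raw I/Q sequences; the module names and layer sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex 1-D convolution on an I/Q signal: shares two real-valued kernels."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, i, q):
        # (I + jQ)(Wr + jWi) = (I*Wr - Q*Wi) + j(I*Wi + Q*Wr)
        return self.conv_r(i) - self.conv_i(q), self.conv_r(q) + self.conv_i(i)

class ComplexResBlock(nn.Module):
    """Residual block built from two complex convolutions."""
    def __init__(self, ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.c1 = ComplexConv1d(ch, ch, kernel_size, padding=pad)
        self.c2 = ComplexConv1d(ch, ch, kernel_size, padding=pad)
        self.act = nn.ReLU()

    def forward(self, i, q):
        ri, rq = self.c1(i, q)
        ri, rq = self.act(ri), self.act(rq)
        ri, rq = self.c2(ri, rq)
        # Skip connection applied to both components.
        return self.act(i + ri), self.act(q + rq)

# Usage on a batch of raw I/Q sequences of length 1024 with one channel each.
i = torch.randn(8, 1, 1024)   # in-phase component
q = torch.randn(8, 1, 1024)   # quadrature component
i_out, q_out = ComplexResBlock(1)(i, q)   # shapes: (8, 1, 1024)
```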

Authors and Affiliations

Lingzhi Qu 1, Junan Yang 1, Keju Huang 1, Hui Liu 1

  1. College of Electronic Engineering, National University of Defense Technology, Hefei, Anhui 230037, People’s Republic of China
