Title: Recognition of Human Emotion from a Speech Signal Based on Plutchik's Model
Journal title: International Journal of Electronics and Telecommunications
Divisions of PAS: Nauki Techniczne (Technical Sciences)
International Journal of Electronics and Telecommunications (IJET, eISSN 2300-1933, until 2013 also print ISSN 2081-8491) is a periodical of the Electronics and Telecommunications Committee of the Polish Academy of Sciences, published by the Warsaw Science Publishers of PAS. It continues the tradition of the Electronics and Telecommunications Quarterly (ISSN 0867-6747), established in 1955 as Rozprawy Elektrotechniczne. IJET is a scientific periodical whose papers present the results of original theoretical, experimental, and review works. They cover widely recognized aspects of modern electronics, telecommunications, microelectronics, optoelectronics, radioelectronics, and medical electronics.
The authors are outstanding scientists and well-known, experienced specialists, as well as young researchers, mainly doctoral candidates. The papers present original approaches to problems, interesting research results, and critical assessments of theories and methods; they discuss the current state or progress of a given branch of technology and describe development prospects. All papers published in IJET are reviewed by international specialists, which ensures that the publications are recognized as the authors' scientific output.
The printed periodical is distributed to those who deal with electronics and telecommunications in national scientific centers as well as in numerous foreign institutions, and it is subscribed to by many specialists and libraries. Its electronic version is available at http://ijet.pl.
Submitted papers are published within half a year, provided that cooperation between the author and the editorial staff is efficient. Papers may be submitted to the editorial office via the journal web page http://ijet.pl.
Publisher: Polish Academy of Sciences, Committee of Electronics and Telecommunications
Identifier: ISSN 2081-8491 (until 2012); eISSN 2300-1933 (since 2013)
References:
R. Plutchik, "The nature of emotions," American Scientist, vol. 89, 2001.
G. Irie et al., "Affective audio-visual words and latent topic driving model for realizing movie affective scene classification," IEEE Transactions on Multimedia, vol. 12, 2010.
Y. Miyakoshi, "Facial emotion detection considering partial occlusion of face using Bayesian network."
Z. Yang, "Multimodal data fusion for aggression detection in train compartments," February 2006.
T. Kostoulas, T. Ganchev, and N. Fakotakis, "Study on speaker-independent emotion recognition from speech on real-world data," <i>Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction</i>, 2008.
M. Kotti, "Speaker-independent negative emotion recognition."
J. Cichosz and K. Ślot, "Low-dimensional feature space derivation for emotion recognition," <i>ICSES 2006</i>.
M. Lugger, "The relevance of voice quality features in speaker independent emotion recognition," Proc. ICASSP, 2007.
Z. Ciota, <i>Metody przetwarzania sygnałów akustycznych w komputerowej analizie mowy</i> (Methods of Acoustic Signal Processing in Computer Speech Analysis), 2010, in Polish.
T. Polzehl, A. Schmitt, and F. Metze, "Approaching multi-lingual emotion recognition from speech – on language dependency of acoustic/prosodic features for anger recognition."
N. Kamaruddin, "Driver behavior analysis through speech emotion understanding," 2010.
R. Hidayati, "The extraction of acoustic features of infant cry for emotion detection based on pitch and formants," 2010.
E. Mower, "A framework for automatic human emotion classification using emotion profiles," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, 2011.
L. Vidrascu, "Detection of real-life emotions in call centers," Proc. Eurospeech, Lisbon, 2005.
K. Izdebski, <i>Emotions in the Human Voice, Volume I: Foundations</i>, 2007.
Y. Wang, "Recognizing human emotional state from audiovisual signals," 2008.
Y. Yeqing, "A new speech recognition method based on prosodic analysis and SVM in Zhuang language," 2011.
A. Janicki, "Rozpoznawanie stanu emocjonalnego mówcy z wykorzystaniem maszyny wektorów wspierających SVM" (Recognition of the speaker's emotional state using support vector machines, SVM), 2008, in Polish.
A. Shaukat, "Emotional state recognition from speech via soft-competition on different acoustic representations," 2011.
A. Razak, "Comparison between fuzzy and NN method for speech emotion recognition."
T. Nwe, "Detection of stress and emotion in speech using traditional and FFT based log energy features," 2003.
K. Soltani, "Speech emotion detection based on neural networks," 2007.
M. Gaurav, "Performance analysis of spectral and prosodic features and their fusion for emotion recognition in speech," 2008.
<a target="_blank" href='http://www.exaudios.com/'>http://www.exaudios.com/</a>
K. Scherer, "Emotion inferences from vocal expression correlate across languages and cultures," Journal of Cross-Cultural Psychology, vol. 32, 2001, doi.org/10.1177/0022022101032001009.
T. Zieliński, <i>Cyfrowe przetwarzanie sygnałów. Od teorii do zastosowań</i> (Digital Signal Processing: From Theory to Applications), October 2009, in Polish.
<a target="_blank" href='http://www.msu.edu/course/'>http://www.msu.edu/course/</a>
S. Narayanan, "Analysis of emotionally salient aspects of fundamental frequency for emotion detection," IEEE Transactions on Audio, Speech, and Language Processing, 2009.
C. Basztura, <i>Komputerowe systemy diagnostyki akustycznej</i> (Computer Systems for Acoustic Diagnostics), 1996, in Polish.
K. Ślot, <i>Rozpoznawanie biometryczne</i> (Biometric Recognition), December 2010, in Polish.