Search results

Number of results: 6

Abstract

This research develops an identification system for the types of Beiguan music, a historical, non-classical music genre, by combining an artificial neural network (ANN), social tagging, and music information retrieval (MIR). Based on a social-tagging strategy, the procedure of this research includes: evaluating the qualifying features of 48 Beiguan music recordings, quantifying 11 music indexes representing tempo and instrumental features, feeding these sets of quantized data into a three-layered ANN, and executing three rounds of testing, each round containing 30 identifications. The ANN testing reaches satisfactory accuracy (97% overall) in classifying three types of Beiguan music. The purpose of this research is to provide a general verification method that can identify diversity within the selected non-classical music genre, Beiguan. The research also quantifies significant musical indexes that can be identified effectively. The advantages of this method include improved data-processing efficiency, fast MIR, and the suggestion of possible musical connections arising from the highly correlated results of the statistical analyses.
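
The pipeline described above (11 quantified indexes per recording, a three-layered ANN, and three rounds of 30 identifications each) can be illustrated with a minimal sketch. The code below uses scikit-learn's MLPClassifier with a single hidden layer as a stand-in for the three-layered network; the feature values, labels, and hidden-layer size are placeholders, not the authors' data or settings.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Placeholder data: 48 recordings x 11 quantized indexes (tempo and
    # instrumental features), each labelled with one of three Beiguan types.
    X = rng.random((48, 11))
    y = rng.integers(0, 3, size=48)

    # Input layer -> one hidden layer -> output layer, i.e. a three-layered net.
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

    # Three rounds of testing, each round containing 30 identifications.
    for rnd in range(3):
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=30, random_state=rnd, stratify=y)
        clf.fit(X_train, y_train)
        print(f"round {rnd + 1}: accuracy = {clf.score(X_test, y_test):.2f}")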

Bibliography

1. Briot J.-P., Hadjeres G., Pachet F.-D. (2019), Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, arXiv: 1709.01620.
2. Hagan M.T., Demuth H.B., Beale M. (2002), Neural Network Design, CITIC Publishing House, Beijing.
3. Lamere P. (2008), Social tagging and music information retrieval, Journal of New Music Research, 37(2): 101–114, doi: 10.1080/09298210802479284.
4. Lu C.-K. (2011), Beiguan Music, Morningstar, Taichung, Taiwan.
5. Pan J.-T. (2019), The transmission of Beiguan in higher education in Taiwan: A case study of the teaching of Beiguan in the department of traditional music of Taipei National University of the Arts [in Chinese], Journal of Chinese Ritual, Theatre and Folklore, 2019.3(203): 111–162.
6. Rosner A., Schuller B., Kostek B. (2014), Classification of music genres based on music separation into harmonic and drum components, Archives of Acoustics, 39(4): 629–638, doi: 10.2478/aoa-2014-0068.
7. Tzanetakis G., Kapur A., Schloss W.A., Wright M. (2007), Computational ethnomusicology, Journal of Interdisciplinary Music Studies, 1(2): 1–24.
8. Wiering F., de Nooijer J., Volk A., Tabachneck-Schijf H.J.M. (2009), Cognition-based segmentation for music information retrieval systems, Journal of New Music Research, 38(2): 139–154, doi: 10.1080/09298210903171145.
9. Yao S.-N., Collins T., Liang C. (2017), Head-related transfer function selection using neural networks, Archives of Acoustics, 42(3): 365–373, doi: 10.1515/aoa-2017-0038.
10. Yeh N. (1988), Nanguan music repertoire: categories, notation, and performance practice, Asian Music, 19(2): 31–70, doi: 10.2307/833866.

Authors and Affiliations

Yu-Hsin Chang 1
Shu-Nung Yao 2

  1. Department of Music, Tainan National University of the Arts, No. 66, Daqi, Guantian Dist., Tainan City 72045, Taiwan
  2. Department of Electrical Engineering, National Taipei University, No. 151, University Rd., Sanxia District, New Taipei City 237303, Taiwan

Abstract

This paper presents a relationship between Auditory Display (AD) and the domains of music and acoustics. First, some basic notions of the Auditory Display area are briefly outlined. Then, the research trends and system solutions within the fields of music technology, music information retrieval, music recommendation, and acoustics that fall within the scope of AD are discussed. Finally, an example of an AD solution based on gaze tracking that may facilitate the music annotation process is shown. The paper concludes with a few remarks about directions for further research in the domains discussed.


Authors and Affiliations

Bożena Kostek

Abstract

In the paper, various approaches to automatic music audio summarization are discussed. The project described in detail is the realization of a method for extracting a music thumbnail, i.e. a fragment of continuous music of a given duration that is most similar to the entire music piece. The results of a subjective assessment of the thumbnail choice are presented, in which four parameters have been taken into account: clarity (representation of the essence of the piece of music), conciseness (the motifs are not repeated in the summary), coherence of the music structure, and the overall usefulness of the summary.
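
As a rough illustration of the thumbnailing idea, the sketch below picks the continuous fragment of a requested duration whose averaged chroma content is most similar (by cosine similarity) to that of the whole piece. It is not the algorithm evaluated in the paper; the file name, thumbnail length, and choice of chroma features are assumptions.

    import librosa
    import numpy as np

    y, sr = librosa.load("song.wav")               # placeholder audio file
    hop = 512
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop)

    thumb_sec = 20.0                               # requested thumbnail duration
    win = int(thumb_sec * sr / hop)                # duration in chroma frames

    whole = chroma.mean(axis=1)
    whole /= np.linalg.norm(whole) + 1e-9

    best_start, best_sim = 0, -1.0
    for start in range(chroma.shape[1] - win):
        seg = chroma[:, start:start + win].mean(axis=1)
        seg /= np.linalg.norm(seg) + 1e-9
        sim = float(seg @ whole)                   # cosine similarity to the whole piece
        if sim > best_sim:
            best_start, best_sim = start, sim

    t0 = best_start * hop / sr
    print(f"thumbnail: {t0:.1f}-{t0 + thumb_sec:.1f} s (similarity {best_sim:.3f})")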


Authors and Affiliations

Jakub Głaczyński
Ewa Łukasik

Abstract

The paper presents a key-finding algorithm based on the music signature concept. The proposed music signature is a set of 2-D vectors which can be treated as a compressed representation of musical content in 2-D space. Each vector represents a different pitch class. Its direction is determined by the position of the corresponding major key in the circle of fifths, and its length reflects the multiplicity (i.e. the number of occurrences) of the pitch class in a musical piece or its fragment. The paper presents the theoretical background, examples explaining the essence of the idea, and the results of tests which confirm the effectiveness of the proposed algorithm for finding the key based on the analysis of the music signature. The developed method was compared with key-finding algorithms using the Krumhansl-Kessler, Temperley, and Albrecht-Shanahan profiles. The experiments were performed on a set of Bach preludes, Bach fugues, and Chopin preludes.
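
The signature construction itself is concrete enough to sketch: one 2-D vector per pitch class, pointing in the circle-of-fifths direction of the corresponding major key and scaled by the pitch-class multiplicity. The key-finding step below is a simplified template comparison, not the authors' algorithm, and the example histogram is made up.

    import numpy as np

    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    # Diatonic pitch classes of a major scale, relative to its tonic.
    MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

    def music_signature(pitch_class_counts):
        """12 two-dimensional vectors: direction from the position of the
        corresponding major key on the circle of fifths, length from the
        multiplicity of that pitch class in the analysed fragment."""
        vectors = np.zeros((12, 2))
        for pc, count in enumerate(pitch_class_counts):
            fifths_pos = (pc * 7) % 12             # steps around the circle of fifths
            angle = 2.0 * np.pi * fifths_pos / 12.0
            vectors[pc] = count * np.array([np.cos(angle), np.sin(angle)])
        return vectors

    def estimate_major_key(pitch_class_counts):
        """Simplified key finding: compare the signature with the signature of a
        plain diatonic template for every major key and keep the best match."""
        sig = music_signature(pitch_class_counts)
        best_key, best_score = None, -np.inf
        for tonic in range(12):
            template_counts = np.zeros(12)
            template_counts[[(tonic + step) % 12 for step in MAJOR_SCALE]] = 1.0
            template = music_signature(template_counts)
            score = float(np.sum(sig * template))  # inner product of the two signatures
            if score > best_score:
                best_key, best_score = NOTES[tonic], score
        return best_key + " major"

    # Crude, made-up pitch-class histogram of a C major fragment.
    counts = [8, 0, 4, 0, 5, 4, 0, 7, 0, 4, 0, 3]
    print(estimate_major_key(counts))              # expected: "C major"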


Authors and Affiliations

Dariusz Kania
Paulina Kania

Abstract

This article presents a study on music genre classification based on music separation into harmonic and drum components. For this purpose, audio signal separation is executed to extend the overall parameter vector with new descriptors extracted from the harmonic and/or drum music content. The study is performed on the ISMIS database of music files, represented by vectors of parameters containing music features. The Support Vector Machine (SVM) classifier and a co-training method adapted for the standard SVM are employed for genre classification. Some additional experiments are performed using reduced feature vectors, which improve the overall result. Finally, results and conclusions drawn from the study are presented, and suggestions for further work are outlined.
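
A minimal sketch of the feature-extension idea is given below: librosa's harmonic/percussive source separation stands in for the harmonic/drum split, and MFCC statistics from the full mix plus both components extend the feature vector fed to an SVM. The file names and genre labels are placeholders (the ISMIS database is not reproduced here), and the co-training step is omitted.

    import librosa
    import numpy as np
    from sklearn.svm import SVC

    def extended_features(path):
        """Feature vector built from the full mix plus its harmonic and
        percussive (drum-like) components obtained by source separation."""
        y, sr = librosa.load(path)
        harmonic, percussive = librosa.effects.hpss(y)
        parts = []
        for signal in (y, harmonic, percussive):
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
            parts.append(mfcc.mean(axis=1))        # simple summary descriptors
            parts.append(mfcc.std(axis=1))
        return np.concatenate(parts)

    # Placeholder file list and genre labels.
    files = ["rock_01.wav", "jazz_01.wav", "classical_01.wav"]
    labels = ["rock", "jazz", "classical"]

    X = np.vstack([extended_features(f) for f in files])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:1]))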

Authors and Affiliations

Aldona Rosner
Bożena Kostek
Björn Schuller

Abstract

Due to the increasing amount of music made available in digital form on the Internet, automatic organization of music is sought. The paper presents an approach to the graphical representation of the mood of songs based on Self-Organizing Maps. Parameters describing the mood of music are proposed, calculated, and then analyzed by correlating them with mood dimensions obtained from Multidimensional Scaling. A map is created in which music excerpts with a similar mood are organized next to each other on a two-dimensional display.
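
A minimal sketch of the map-building step, assuming the third-party minisom package and randomly generated mood descriptors in place of the parameters proposed in the paper: each excerpt is assigned to its best-matching node of a trained Self-Organizing Map, so excerpts with similar descriptors land close to each other on the 2-D grid.

    import numpy as np
    from minisom import MiniSom        # third-party SOM implementation (pip install minisom)

    # Placeholder mood descriptors for 100 music excerpts (standing in for the
    # mood-related parameters proposed in the paper), already standardised.
    rng = np.random.default_rng(0)
    features = rng.standard_normal((100, 8))

    # 10 x 10 self-organizing map trained on the mood descriptors.
    som = MiniSom(10, 10, input_len=8, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.random_weights_init(features)
    som.train_random(features, 2000)

    # Each excerpt is placed at its best-matching map node; excerpts with a
    # similar mood end up close to each other on the two-dimensional display.
    for i, vec in enumerate(features[:5]):
        print(f"excerpt {i}: map position {som.winner(vec)}")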

Authors and Affiliations

Magdalena Plewa
Bożena Kostek
