Abstract: Riding the tide of deep learning, the field of music information retrieval (MIR) has developed in parallel, and a wide variety of feature-learning models have been applied to music classification and tagging tasks. Among these techniques, deep convolutional neural networks (CNNs) have been widely used and outperform traditional approaches, especially in music genre classification and prediction. For music recommendation, however, there remains a large semantic gap between audio genres and the various aspects of a song that influence user preference. Aiming to bridge this gap, we construct an automatic music aesthetic annotation model based on the MIDI format, which allows the similarity between music pieces to be compared and measured through harmonic analysis. We use qualification matrices converted from MIDI files as input to train two different classifiers, a support vector machine (SVM) and a decision tree (DT). Experimental results on a tag prediction task show that both learning algorithms can extract high-level properties from music information in an end-to-end manner. The proposed model helps learn audience taste, and the resulting recommendations are likely to appeal to niche consumers.
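The tag-prediction setup described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the MIDI files have already been converted into fixed-length feature vectors (the abstract's "qualification matrices"), and the data, tag count, and hyperparameters here are all synthetic placeholders.

```python
# Toy sketch of training the two classifiers named in the abstract (SVM and
# decision tree) on MIDI-derived feature matrices. Data is randomly generated
# for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # 200 pieces, 64 hypothetical harmonic features
y = rng.integers(0, 4, size=200)    # 4 hypothetical aesthetic tags

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("DT", DecisionTreeClassifier(max_depth=8))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

With real MIDI-derived features and tags, only the data-loading step would change; both classifiers share the same `fit`/`predict` interface.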
Abstract: In this paper, we propose a smart music player that combines musical genre classification with spatial audio processing. The musical genre is classified by content analysis of the musical segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. At playback, the spatial sound is then boosted with frequency gains chosen according to the musical genre. Experiments measured the accuracy of detecting musical segments in the audio stream and of classifying their genre, and a listening test evaluated the spatial audio processing based on the virtual acoustic space.
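The two playback stages this abstract describes can be illustrated roughly as below. This is not the paper's implementation: the reverberation here is a single feedback comb filter rather than a virtual-acoustic-space model, and the genre names, band edges, and gain values are invented for illustration.

```python
# Illustrative sketch of (1) adding artificial reverberation to a mono signal
# and (2) boosting it with genre-dependent frequency gains before playback.
import numpy as np

def add_reverb(mono, sr, delay_ms=50.0, decay=0.4):
    """Toy artificial reverberation: a single feedback comb filter."""
    d = int(sr * delay_ms / 1000.0)
    out = np.copy(mono)
    for n in range(d, len(out)):
        out[n] += decay * out[n - d]
    return out

# Hypothetical per-genre equalization presets (dB gains for two bands).
GENRE_GAINS_DB = {"rock": {"low": 4.0, "high": 2.0},
                  "classical": {"low": 0.0, "high": 1.0}}

def genre_boost(signal, sr, genre):
    """Apply the genre's band gains in the frequency domain."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    g = GENRE_GAINS_DB[genre]
    spec[freqs < 250.0] *= 10 ** (g["low"] / 20.0)    # bass band
    spec[freqs > 4000.0] *= 10 ** (g["high"] / 20.0)  # treble band
    return np.fft.irfft(spec, n=len(signal))

sr = 16000
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)  # 1 s test tone
out = genre_boost(add_reverb(tone, sr), sr, "rock")
```

In the proposed player, the `genre` argument would come from the classifier running in parallel with this processing chain.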
Abstract: Recently, much research has been conducted to retrieve pertinent parameters and adequate models for automatic music genre classification. In this paper, two measures based on information-theory concepts are investigated for mapping the feature space to the decision space. A Gaussian Mixture Model (GMM) is used as a baseline and reference system. Various strategies are proposed for training and testing sessions with matched or mismatched conditions: long training with long testing, and long training with short testing. In all experiments, the file sections used for testing are never used during training. With matched conditions, all examined measures yield the best and similar scores (almost 100%). With mismatched conditions, the proposed measures yield better scores than the GMM baseline system, especially in the short-testing case. It is also observed that the average discrimination information measure is most appropriate for music category classification, whereas the divergence measure is more suitable for music subcategory classification.
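The GMM baseline this abstract refers to is conventionally built as one mixture model per genre, with a test excerpt assigned to the genre whose model gives the highest average log-likelihood. The sketch below follows that convention under stated assumptions: the features, genre names, and mixture size are synthetic placeholders, not the paper's actual setup.

```python
# Rough sketch of a per-genre GMM baseline classifier on synthetic features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical training features: 300 frames of 12-dim features per genre.
train = {"jazz": rng.normal(0.0, 1.0, size=(300, 12)),
         "metal": rng.normal(3.0, 1.0, size=(300, 12))}

# Fit one Gaussian mixture per genre.
models = {g: GaussianMixture(n_components=4, random_state=0).fit(X)
          for g, X in train.items()}

def classify(excerpt):
    """Assign the genre whose model has the highest mean log-likelihood."""
    return max(models, key=lambda g: models[g].score(excerpt))

# Test excerpt drawn near the "metal" training distribution.
test_excerpt = rng.normal(3.0, 1.0, size=(50, 12))
print(classify(test_excerpt))
```

Shortening the test excerpt (fewer feature frames) makes the log-likelihood estimate noisier, which is consistent with the short-testing case being the harder condition in the experiments above.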