Description
Due to the increased demand for music streaming/recommender services and the recent developments in music information retrieval frameworks, Music Genre Classification (MGC) has attracted the community's attention. However, convolution-based approaches are known to lack the ability to efficiently encode and localize temporal features. In this paper, we study broadcast-based neural networks, aiming to improve localization and generalizability with a small number of parameters (about 180k), and investigate twelve variants of broadcast networks, discussing the effect of block configuration, pooling method, activation function, normalization mechanism, label smoothing, channel interdependency, LSTM block inclusion, and variants of inception schemes. Our computational experiments on relevant datasets such as GTZAN, Extended Ballroom, HOMBURG, and Free Music Archive (FMA) show state-of-the-art classification accuracies in Music Genre Classification. Our approach offers insights into, and shows the potential of, compact and generalizable broadcast networks for music and audio classification.
Accepted for oral presentation at the World Congress on Computational Intelligence (WCCI 2022) - International Joint Conference on Neural Networks (IJCNN 2022)
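The abstract describes broadcast-based blocks that keep the parameter count small while improving temporal localization. Below is a minimal, hypothetical PyTorch sketch of one broadcast-style residual block in the BC-ResNet spirit (frequency-wise depthwise convolution, averaging over the frequency axis, temporal depthwise convolution, and a broadcast addition back over all frequency bins); the layer choices and sizes are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative broadcast-style residual block (assumed design, not the paper's code).
import torch
import torch.nn as nn

class BroadcastBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Frequency-wise depthwise convolution on the (freq, time) plane.
        self.freq_dw = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(3, 1),
                      padding=(1, 0), groups=channels, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Temporal depthwise convolution applied after collapsing frequency.
        self.temp_dw = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(1, 3),
                      padding=(0, 1), groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
        )
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, channels, freq, time)
        f = self.freq_dw(x)
        # Average out the frequency axis, process temporally, then broadcast
        # the temporal features back over every frequency bin.
        t = self.temp_dw(f.mean(dim=2, keepdim=True))
        return self.act(x + f + t)  # residual + broadcast addition

if __name__ == "__main__":
    # Example: a batch of 8 mel-spectrogram feature maps with 40 channels.
    block = BroadcastBlock(40)
    out = block(torch.randn(8, 40, 64, 128))
    print(out.shape)  # torch.Size([8, 40, 64, 128])
```

Because the temporal branch operates on a single collapsed frequency row, its output is added back to every frequency bin via broadcasting, which is what keeps the parameter budget of such blocks small.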
Published in
- 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, 2022-07-18
- IEEE
Keywords
- Signal Processing (eess.SP)
- FOS: Computer and information sciences
- Sound (cs.SD)
- Computer Science - Artificial Intelligence
- Computer Science - Sound
- Multimedia (cs.MM)
- Artificial Intelligence (cs.AI)
- Audio and Speech Processing (eess.AS)
- FOS: Electrical engineering, electronic engineering, information engineering
- Electrical Engineering and Systems Science - Signal Processing
- Computer Science - Multimedia
- Electrical Engineering and Systems Science - Audio and Speech Processing
Details
- CRID: 1873398392720265984
- Data source: OpenAIRE