Concluding remarks
It’s been a great team effort and a lot of work (most of it right at the last minute!) to put this tutorial together. Our sincere hope is that this content inspires more and more people to look into adapting models to culturally specific music genres, especially in Latin America.
In particular, we hope that the practical code examples give anyone the means to build, train, and test their own systems (with their own annotated data!), and to extend these ideas to MIR topics beyond beat tracking.
Note
There are still plenty of fun open challenges to work on:
Data selection: When working with a small annotation budget, which data should we annotate to get the most out of our effort?
Model generalization: How can we balance specializing a model for a specific genre while maintaining some degree of generalization to related music traditions?
Low-resource data augmentation: What innovative augmentation techniques can be developed for rhythmically complex or timbrally diverse genres to improve model training with minimal data? (See the sketch after this list.)
Live or noisy conditions: Many of these genres are performed live, and live recordings are nothing like studio recordings. How do we make these tools more robust to such conditions?
Exploring other MIR tasks: Beyond beat tracking, how can we adapt these techniques to other MIR tasks such as source separation, chord recognition, or melody estimation for culturally specific music genres?
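To make the augmentation question above a bit more concrete, here is a minimal sketch of one label-aware approach: tempo augmentation, where a recording is time-stretched and its beat annotations are rescaled to match. It assumes librosa is available and that annotations are beat times in seconds; `stretch_example`, the file names, and the stretch rates are hypothetical illustrations, not part of the tutorial's codebase.

```python
# Minimal sketch: tempo augmentation for beat-tracking data.
# Assumes beat annotations are times in seconds; all paths and
# rates below are placeholders, not from the tutorial.
import numpy as np
import librosa

def stretch_example(audio, beat_times, rate):
    """Time-stretch audio and rescale its beat annotations to match.

    rate > 1 speeds the audio up, so beat times shrink by the same factor.
    """
    stretched = librosa.effects.time_stretch(audio, rate=rate)
    return stretched, np.asarray(beat_times) / rate

# Usage: turn one annotated recording into several tempo variants.
y, sr = librosa.load("example.wav", sr=None)   # hypothetical recording
beats = np.loadtxt("example.beats")            # hypothetical beat times (seconds)
augmented = [stretch_example(y, beats, r) for r in (0.9, 1.0, 1.1)]
```

Because both the audio and its labels are transformed together, each variant remains a valid training example; richer schemes (pitch shifting, timbral perturbations) could follow the same pattern.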