Researchers within the Digital Media Technology centre have produced a state-of-the-art music transcription system, evidenced in multiple papers and community-led evaluations.
- Jason Hockman
- Ryan Stables
- Carl Southall
Automatic music transcription (AMT) is the extraction of pitch and duration information from acoustic signals, and is crucial to several fields of study (e.g., musicology, music information retrieval).
A key motivation in this research is to develop tools for extracting the rhythm embedded within musical audio signals. We set out to investigate whether the field could be advanced through the incorporation of deep learning systems tailored specifically to the task, and we were the first and most successful in this endeavour.
How has the research been carried out?
Over four years, we developed systems for music transcription and improved upon these through iterative development of the algorithms, training procedures/optimisation criteria, and datasets. The findings are documented in over ten papers from high-impact conferences and journals.
- Achieved state-of-the-art results in the Music Information Retrieval Evaluation eXchange (MIREX) community evaluation (evidenced by results on the MIREX webpage).
- Achieved the best results in a community-led comparison of all current drum transcription models, published in an IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) journal paper.
- Consulted by the Google Brain group in the development of their music transcription system (evidenced by email communications).
- Developed evaluation strategies for music transcription systems now used by the community (evidenced by citations).
- Produced new datasets for the task, now used by the community (evidenced by citations).
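Community evaluations of this kind typically score a transcription system by matching its predicted note or drum onsets against ground-truth annotations within a small time tolerance, then reporting precision, recall, and F-measure. A minimal sketch of that style of scoring is below; the 50 ms tolerance, the `f_measure` helper, and the example onset lists are illustrative assumptions, not the exact procedure used in the evaluations cited above.

```python
def f_measure(reference, predicted, tolerance=0.05):
    """Score predicted onset times (in seconds) against reference onsets.

    Each predicted onset may match at most one reference onset, and a match
    requires the two times to lie within `tolerance` seconds of each other.
    Returns (precision, recall, f_measure).
    """
    reference = sorted(reference)
    used = [False] * len(reference)  # each reference onset matches at most once
    matched = 0
    for p in sorted(predicted):
        # Find the closest still-unmatched reference onset within tolerance.
        best, best_dist = None, tolerance
        for i, r in enumerate(reference):
            if not used[i] and abs(r - p) <= best_dist:
                best, best_dist = i, abs(r - p)
        if best is not None:
            used[best] = True
            matched += 1
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(reference) if reference else 0.0
    denom = precision + recall
    f = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f

# Illustrative data: four annotated onsets, one spurious detection at 2.40 s.
ref = [0.50, 1.00, 1.50, 2.00]
pred = [0.51, 1.04, 1.50, 2.40]
print(f_measure(ref, pred))  # → (0.75, 0.75, 0.75)
```

Matching within a tolerance window, rather than requiring exact time agreement, reflects that human annotations of onset times carry a few milliseconds of uncertainty themselves.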