Jason Hockman

Associate Professor
School of Computing and Digital Technology - DMT
- Email: Jason.Hockman@bcu.ac.uk
- Phone: 0121 202 2386
Jason Hockman is Associate Professor of Audio Engineering at Birmingham City University. He is a member of the Digital Media Technology Laboratory (DMT Lab), in which he leads the Sound and Music (SoMA) Group, which focuses on the computational analysis of sound and music and on digital audio processing.
Jason's expertise is in music informatics, machine listening and computational musicology, topics on which he lectures and has published nearly 50 papers in journals and conference proceedings. His research has focused on a variety of aspects of computational rhythm and metre detection, music transcription, and content-based audio effects. Jason received a Bachelor's degree in Sociology from Cornell University (USA, 2000), a Master's in Music Technology from New York University (USA, 2007), and a PhD in Music Research from McGill University (Canada, 2014).
Since joining Birmingham City University in 2015, Jason has collaborated with several international institutions, most recently AIST (Japan), the International Audio Laboratories Erlangen (Germany) and INESC TEC (Portugal), as well as with industry partners such as the Arup Group. Jason has served as co-chair of international conferences and workshops on music, perception and digital signal processing. As an electronic musician, Jason has had several critically acclaimed releases on a variety of established international record labels, including his own Detuned Transmissions imprint.
Areas of Expertise
• Music information retrieval
• Machine listening
• Rhythm analysis
• Drum transcription
• Onset, beat and metre detection
• Digital signal processing
• Machine learning
• Computational musicology
• Interactive music systems
Qualifications
• Bachelor's in Sociology (Cornell University, USA)
• Master's in Music Technology (New York University, USA)
• PhD in Music Research (McGill University, Canada)
Teaching
• DIG4157: Digital audio fundamentals
• DIG4161: Sound for film
• DIG6109: Music information retrieval
• DIG6111: New interfaces for musical expression
Research
Jason Hockman is a member of the Digital Media Technology Laboratory (DMT Lab), where he leads the Sound and Music (SoMA) Group's research in the computational analysis of sound and music and in digital audio processing. His research has focused on a variety of aspects of computational rhythm, including onset, beat and downbeat detection, music transcription, and audio effects. His Master's thesis (New York University) focused on automated timbral transformations of percussion loops, and his doctoral dissertation involved ethnographic and technological research into the UK's electronic music history. He is currently working in the areas of music transcription, content-aware audio effects, synthesiser and audio effect preset recommendation, virtual music assistants and automatic remixing.
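As an illustration of the machine-listening tasks mentioned above, the minimal sketch below estimates onsets, tempo and beat positions from an audio file using the open-source librosa library. It is a generic example rather than the SoMA Group's own method, and the input filename is hypothetical.

```python
# Minimal onset- and beat-detection sketch using the open-source librosa library.
# Illustrative only; this is not the SoMA Group's own method, and "breakbeat.wav"
# is a hypothetical input file.
import librosa

# Load the audio as a mono signal at librosa's default sample rate (22050 Hz).
y, sr = librosa.load("breakbeat.wav")

# Detect note onsets from an onset-strength (spectral flux) envelope, in seconds.
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# Estimate a global tempo and the corresponding beat positions.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First onsets (s):", onset_times[:5])
print("First beats (s):", beat_times[:5])
```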
Postgraduate Supervision
Postgraduate supervision (present)
• Maciej Tomczak (DoS)
• Jake Drysdale (DoS)
• Niccolo Graniere (2nd)
• Balandino Di Donato (2nd)
Postgraduate supervision (past)
• Dr Islah Ali-MacLachlan (DoS, 2015–19)
• Dr Carl Southall (DoS, 2015–20)
• Dr Spyridon Stasis (2nd, 2015–18)
Publications
Drysdale, J., M. Tomczak and J. Hockman. 2020. Adversarial synthesis of drum sounds. In Proceedings of the 23rd International Conference on Digital Audio Effects, Vienna, Austria.
Cheshire, M., R. Stables and J. Hockman. 2020. Investigating timbral differences of varied velocity snare drum strikes. In Proceedings of the 147th Convention of the Audio Engineering Society (accepted), New York City, New York, USA.
Tomczak, M., J. Drysdale and J. Hockman. 2019. Drum translation for timbral and rhythmic transformation. In Proceedings of the 22nd International Conference on Digital Audio Effects, Birmingham, UK.
Southall, C., R. Stables and J. Hockman. 2019. Trainable data manipulation with unobserved instruments. In Proceedings of the 5th Workshop on Intelligent Music Production, Birmingham, UK.
Cheshire, M., R. Stables and J. Hockman. 2019. Microphone comparison: Spectral feature mapping for snare drum recording. In Proceedings of the 147th Convention of the Audio Engineering Society, New York City, New York, USA.
Wu, C.-W., C. Dittmar, C. Southall, R. Vogl, G. Widmer, J. Hockman, M. Müller and A. Lerch. 2018. A review of automatic drum transcription. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26(9): 1457–1483.
Hockman, J. and J. Thibodeau. 2018. Games without frontiers: Audio games for music production and performance. In: M. Aramaki, M. Davies, R. Kronland-Martinet and S. Ystad (eds.), Music Technology with Swing. Lecture Notes in Computer Science, vol. 11265. Springer-Verlag.
López-Serrano, P., M.E.P. Davies, J. Hockman, C. Dittmar and M. Müller. 2018. Break-informed audio decomposition for interactive redrumming. In Proceedings of the International Society for Music Information Retrieval Conference, Paris, France.
Cheshire, M., J. Hockman and R. Stables. 2018. Microphone comparison for snare drum recording. In Proceedings of the 145th Convention of the Audio Engineering Society, New York City, New York, USA.
Tomczak, M., C. Southall and J. Hockman. 2018. Audio style transfer with rhythmic constraints. In Proceedings of the 21st International Conference on Digital Audio Effects, Porto, Portugal.
Southall, C., R. Stables and J. Hockman. 2018. Improving peak picking using multiple time-step loss functions. In Proceedings of the International Society for Music Information Retrieval Conference, Paris, France.
Southall, C., R. Stables and J. Hockman. 2018. Player vs Transcriber: A game approach to automatic music transcription. In Proceedings of the International Society for Music Information Retrieval Conference, Paris, France.
Michailidis, T. and J. Hockman. 2018. Affordances of vibrations in performances and composition. In Proceedings of the Sound and Music Computing Conference, Limassol, Cyprus.
Ali-MacLachlan, I., M. Tomczak, C. Southall and J. Hockman. 2018. Player recognition for traditional Irish flute recordings. In Proceedings of the 8th International Workshop on Folk Music Analysis, Birmingham, UK.
Southall, C., C.-W. Wu, A. Lerch and J. Hockman. 2017. MDB Drums: An annotated subset of MedleyDB for automatic drum transcription. In Proceedings of the International Society for Music Information Retrieval Conference, Suzhou, China.
Southall, C., N. Jillings, R. Stables and J. Hockman. 2017. ADTWeb: An open-source browser based automatic drum transcription system. In Proceedings of the International Society for Music Information Retrieval Conference, Suzhou, China.
Hockman, J. and J. Thibodeau. 2017. Games without frontiers: Audio games for music production and performance. In Proceedings of the 13th International Symposium on Computer Music Multidisciplinary Research, Matosinhos, Portugal.
Southall, C., R. Stables and J. Hockman. 2017. Automatic drum transcription for polyphonic recordings using soft attention mechanisms and convolutional neural networks. In Proceedings of the 18th International Society for Music Information Retrieval Conference, Suzhou, China.
Ali-MacLachlan, I., M. Tomczak, C. Southall and J. Hockman. 2017. Improved onset detection for traditional flute recordings using convolutional neural networks. The 7th International Workshop on Folk Music Analysis, Málaga, Spain.
Di Donato, B., J. Dooley, J. Hockman, J. Bullock and S. Hall. 2017. MyoSpat: A hand-gesture controlled system for sound and light projection manipulation. In Proceedings of the 2017 International Computer Music Conference, Shanghai, China.
Stasis, S., J. Hockman and R. Stables. 2017. Navigating descriptive sub-representations of musical timbre. In Proceedings of the Conference for New Interfaces for Musical Expression, Copenhagen, Denmark.
Southall, C., R. Stables and J. Hockman. 2016. Automatic drum transcription using bi-directional recurrent neural networks. In Proceedings of the International Society for Music Information Retrieval Conference, New York City, New York, USA.
Ali-MacLachlan, I., M. Tomczak, C. Southall and J. Hockman. 2016. Note, cut and strike detection for traditional Irish flute recordings. In Proceedings of the 6th International Workshop on Folk Music Analysis, Dublin, Ireland.
Stasis, S., R. Stables and J. Hockman. 2016. Semantically controlled adaptive equalization in reduced dimensionality parameter space. Applied Sciences 6(4): 116–34.
Stasis, S., J. Hockman and R. Stables. 2016. Descriptor sub-representations in semantic equalisation. In Proceedings of the AES 2nd Workshop on Intelligent Music Production, London, United Kingdom.
Hockman, J.A. and M.E.P. Davies. 2015. Computational strategies for breakbeat classification and resequencing in hardcore, jungle and drum & bass. In Proceedings of the 18th International Conference on Digital Audio Effects, Trondheim, Norway.
Stasis, S., R. Stables and J. Hockman. 2015. A model for adaptive reduced-dimensionality equalisation. In Proceedings of the 18th International Conference on Digital Audio Effects, Trondheim, Norway (Best paper, 2nd prize).
Böck, S., M.E.P. Davies, J. Hockman, A. Holzapfel and F. Krebs. 2014. MIREX audio downbeat estimation task. Technical report.
Devaney, J., J.A. Hockman, J. Wild and I. Fujinaga. 2013. Diatonic semitone tuning in two-part singing. In Proceedings of the 2013 Society of Music Perception and Cognition Conference, Toronto, Canada.
Weigl, D., D. Sears, J.A. Hockman, S. McAdams and C. Guastavino. 2013. Investigating the effects of beat salience on beat synchronization judgments during music listening. In Proceedings of the 2013 Society of Music Perception and Cognition Conference, Toronto, Canada.
Hockman, J.A., M.E.P. Davies and I. Fujinaga. 2012. One in the jungle: Downbeat detection in hardcore, jungle, and drum and bass. In Proceedings of the International Society for Music Information Retrieval Conference, Porto, Portugal.
Hockman, J.A., D.M. Weigl, C. Guastavino and I. Fujinaga. 2011. Discrimination between phonograph playback systems. In Proceedings of the 131st Convention of the Audio Engineering Society, New York City, New York, USA.
Hockman, J.A. and I. Fujinaga. 2010. Fast vs. slow: Learning tempo octaves from user data. In Proceedings of the International Society for Music Information Retrieval Conference, Utrecht, Netherlands.
McKay, C., J.A. Burgoyne, J.A. Hockman, J. Smith, G. Vigliensoni and I. Fujinaga. 2010. Evaluating the performance of lyrical features relative to and in combination with audio, symbolic and cultural features. In Proceedings of the International Society for Music Information Retrieval Conference, Utrecht, Netherlands.
Li, Z., Q. Xiang, J.A. Hockman, J. Yang, Y. Yi, I. Fujinaga and Y. Wang. 2010. A music search engine for therapeutic gait training. In Proceedings of ACM Multimedia 2010, Florence, Italy.
Hockman, J.A., M.M. Wanderley and I. Fujinaga. 2009. Phase vocoder manipulation by runner’s pace. In Proceedings of the Conference for New Interfaces for Musical Expression, Pittsburgh, Pennsylvania, USA.
Hockman, J.A., M.E.P. Davies, J. Bello and M. Plumbley. 2008. Automated rhythmic transformation of musical audio. In Proceedings of the 11th International Conference on Digital Audio Effects, Espoo, Finland.
Pugin, L., J.A. Hockman, J.A. Burgoyne and I. Fujinaga. 2008. Gamera versus Aruspix: Two optical music recognition approaches. In Proceedings of the International Conference on Music Information Retrieval, Philadelphia, Pennsylvania, USA.
Dissertations and Theses
Hockman, J.A. 2014. An ethnographic and technological study of breakbeats in Hardcore, Jungle, and Drum & Bass. Doctoral dissertation, McGill University, Canada.
Work With Industry
Jason owns and operates an internationally distributed electronic music record label.
Links and Social Media
Detuned Transmissions
• Website: http://detunedtransmissions.com/
• SoundCloud: https://soundcloud.com/detuned-transmissions
• Facebook: https://www.facebook.com/detunedtransmissions/
DAAT
• Facebook: https://www.facebook.com/daatmusic/