
Intelligent Music Production and Automated Music Transformations

Simplifying complex music production processes and making them more accessible, bridging the gap between musicians and technology.

Intelligent music production looks to simplify typically complex processes.


Research background

This is an ongoing area of research, begun in two separate streams by:

  • Ryan Stables, whose work led to the creation of Semantic Labs and the Faders.io spinout from BCU. 
  • Jason Hockman, whose work began with content-aware effects (2008) and was developed through the joint effort of three institutions (McGill University (CA), New York University (USA) and Queen Mary University (UK)), with results embedded in a variety of applications and plugins (e.g., LickMixer).

Research aims

Intelligent music production and automated audio transformations intend to simplify complex music production processes, making them more accessible, thus bridging the gap between musicians and technology.

To do this, we consider interfaces of reduced complexity and semantic control of low-level music production parameters. We approach this problem using a variety of techniques, including machine learning, adaptive audio processing and natural language processing.
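To illustrate the idea of semantic control, the sketch below maps a high-level descriptor to low-level equaliser parameters. The descriptor names and band values are purely illustrative assumptions, not taken from the project's actual models:

```python
# Hypothetical sketch: mapping a semantic descriptor to low-level EQ settings.
# The descriptors and parameter values below are illustrative only.

SEMANTIC_EQ_PRESETS = {
    # descriptor -> list of (centre_freq_hz, gain_db, q) EQ bands
    "warm":   [(200.0, 3.0, 0.7), (4000.0, -2.0, 1.0)],
    "bright": [(8000.0, 4.0, 0.8), (250.0, -1.5, 1.0)],
}

def eq_bands_for(descriptor: str):
    """Return low-level EQ bands for a high-level semantic descriptor."""
    try:
        return SEMANTIC_EQ_PRESETS[descriptor.lower()]
    except KeyError:
        raise ValueError(f"No preset for descriptor: {descriptor!r}")
```

In practice such a mapping would be learned from data (e.g., crowd-sourced descriptor ratings) rather than hand-coded, but the interface idea is the same: the musician supplies one word, the system supplies the parameters.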

Example applications include content-aware audio effects, synthesiser and audio effect preset recommendation, virtual music assistants and automatic remixing.

How has the research been carried out?

Various projects have utilised different methods; however, for the most part, there has been a marked increase in our reliance on cutting-edge deep learning methods.

We steer these towards two key areas: parameter estimation for pre-existing musical effects (e.g., work by Stasis et al.) and sound and rhythm generation (e.g., work by Tomczak et al. and Drysdale et al.). Please see associated publications for further information.
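To make the parameter-estimation objective concrete, here is a minimal, self-contained sketch. In the published work a neural network predicts effect parameters from audio features; this toy version substitutes a grid search over a single gain parameter, matching the RMS level of a reference rendering. The effect, signals, and search grid are all illustrative assumptions:

```python
import math

def apply_gain(signal, gain):
    """A toy 'audio effect' with one parameter: scalar gain."""
    return [gain * x for x in signal]

def rms(signal):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def estimate_gain(dry, target, candidates):
    """Pick the gain whose rendered output best matches the target's RMS.

    A stand-in for learned parameter estimation: the objective (match a
    reference rendering of a pre-existing effect) is the same, only the
    search strategy differs.
    """
    target_rms = rms(target)
    return min(candidates,
               key=lambda g: abs(rms(apply_gain(dry, g)) - target_rms))

dry = [0.1, -0.2, 0.3, -0.4]
wet = apply_gain(dry, 2.0)  # reference rendered with an "unknown" gain
best = estimate_gain(dry, wet, [g / 10 for g in range(1, 41)])
# best == 2.0
```

Real content-aware effects estimate many interacting parameters from rich spectral features, which is why deep networks replace exhaustive search; the objective function, however, remains a match to a target rendering.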

Research outcomes

  • 20+ publications in high-impact conferences and journals, cited widely across a variety of projects.
  • Semantic Labs spinout and development of Faders.io by Ryan Stables and others.
  • Second-best paper award at a premier signal processing conference for audio production software (evidenced on the DAFx 2015 website).
  • References in cutting-edge audio plugins such as flowEQ (evidenced on website: https://floweq.ml/), and in new products utilising intelligent music production tools (e.g., from iZotope).