New software will let users process sounds by entering keywords such as “warm”, “crunchy” or “dreamy” rather than technical parameters.
Users will also be able to label the sounds they create with keywords, allowing whole series of sounds to be grouped together over time and strengthening searches for specific types of sound.
The software, developed by researchers at Birmingham City University, trains computers to understand the language of musicians when applying effects to their music.
The software – the SAFE Project – uses artificial intelligence to allow a computer to perceive sounds the way a human being does. Its development was motivated by the lack of statistically defined, meaningful vocabulary in music production.
The software helps to computationally define words such as “dreamy”, so a computer can easily find a sound, or set of sounds, that matches what a listener would consider dreamy.
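The idea of computationally defining a descriptor can be sketched as follows. This is a minimal, hypothetical illustration only: it assumes descriptors are defined by averaging user-labelled effect settings, and all class names, parameter names and data are invented for the example, not taken from the SAFE project's actual implementation.

```python
from collections import defaultdict
from statistics import mean

class DescriptorModel:
    """Hypothetical sketch: a descriptor such as "warm" is defined
    statistically as the average of the effect-parameter settings
    that users have tagged with that word."""

    def __init__(self):
        # descriptor -> list of parameter dicts contributed by users
        self._examples = defaultdict(list)

    def add_example(self, descriptor, params):
        """Store one user-labelled effect setting, e.g. EQ gains."""
        self._examples[descriptor.lower()].append(params)

    def settings_for(self, descriptor):
        """Return the mean parameter values for a descriptor."""
        examples = self._examples[descriptor.lower()]
        if not examples:
            raise KeyError(f"no examples for {descriptor!r}")
        keys = examples[0].keys()
        return {k: mean(e[k] for e in examples) for k in keys}

# Two users tag their (invented) EQ settings as "warm"
model = DescriptorModel()
model.add_example("warm", {"low_gain_db": 4.0, "high_gain_db": -2.0})
model.add_example("warm", {"low_gain_db": 6.0, "high_gain_db": -4.0})

print(model.settings_for("warm"))  # averaged "warm" setting
```

As more users contribute labelled settings, the average for each word becomes a more reliable, statistically grounded definition that the software can apply on demand.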
It aims to reduce the long periods of training and expensive equipment required to make music, while also giving musicians more intuitive control over the music that they produce.
Speaking at the British Science Festival in Birmingham, Dr Ryan Stables, lecturer in audio engineering and acoustics at Birmingham City University and lead researcher on the SAFE project, said: “When we started the project, we were really keen to try and simplify the whole process of music production for those who were untrained in the area.
“Musicians can often spend their whole lives mastering their instrument, but then when they come to the production stage, it’s very difficult for them to produce a well-recorded piece of music. The SAFE project aims to overcome this and gives musicians and music production novices the ability to be creative with their music.”
The software is available to download at ryanstables.co.uk