“VTGO (Vertigo)” is a release from Kemi Sulola, Harriet Raynor, and Rebel Algorithms.
The song placed 3rd in the International 2023 AI Song Contest, selected from an initial collection of 35 entries.
You are welcome to listen to the song on your platform of choice:
- YouTube
- Spotify
- Apple Music
- YouTube Music
- VR (for best experience, use Meta Quest 2)
Neural audio
This year, we made extensive use of AI in the form of “neural audio”: artificial neural networks trained on, and capable of generating, waveform audio or spectrograms.
The model we used most in this respect is NoiseBandNet, an in-house model by Adrián Barahona-Ríos and Tom Collins. Here are a few examples of it in action:
[The original page embeds a table of audio clips for Examples 2–5, with Training, Target, Inference, and In-context columns. Examples 2 and 5 have no training clip, and their generated samples did not make it into the final version of the song.]
For more details, see this poster.
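At its core, NoiseBandNet belongs to a family of synthesizers that reconstruct a sound by applying learned, time-varying amplitudes to a bank of filtered-noise bands. The sketch below is a minimal NumPy illustration of that synthesis idea only: white noise is split into frequency bands via FFT masking, each band is scaled by its own amplitude envelope, and the bands are summed. The band edges and envelopes here are made up for illustration; in the real model a neural network predicts the envelopes, and the actual filterbank design differs.

```python
import numpy as np

def noise_band_synth(envelopes, band_edges_hz, sr=16000, seed=0):
    """Sum amplitude-modulated noise bands: the signal model behind
    filtered-noise synthesizers such as NoiseBandNet (illustrative only)."""
    n_bands, n_samples = envelopes.shape
    assert len(band_edges_hz) == n_bands + 1
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sr)
    out = np.zeros(n_samples)
    for b in range(n_bands):
        # Zero out everything outside this band, then return to the time domain.
        mask = (freqs >= band_edges_hz[b]) & (freqs < band_edges_hz[b + 1])
        band = np.fft.irfft(spectrum * mask, n=n_samples)
        # Apply this band's (hypothetical, hand-written) amplitude envelope.
        out += envelopes[b] * band
    return out

sr = 16000
t = np.arange(sr) / sr
# Two made-up one-second envelopes: a Hann swell and a slow tremolo.
envelopes = np.stack([
    np.hanning(sr),
    0.5 * (1.0 + np.sin(2.0 * np.pi * 2.0 * t)),
])
y = noise_band_synth(envelopes, [100.0, 1000.0, 4000.0], sr=sr)
```

Training then amounts to having a network output the envelopes and minimizing a spectral loss between the synthesized and target audio, which is why the examples above distinguish training, target, and inference clips.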
Team
The team is or has been affiliated with the Music Computing and Psychology Lab in the Frost School of Music at the University of Miami, one of the best places in the world to study music in all its diverse forms.
Collaboration members in pseudo-random order:
- Kemi Sulola is a London-born, independent singer-songwriter.
- Harriet Raynor is a music producer and sound engineer based in Sheffield, and an undergraduate in Music and Sound Recording at the University of York.
- Mark Hanslip works on AI for music analysis and generation. He trained text- and audio-based neural networks for this year's task.
- Kyle Worrall is a PhD student (Computer Science) at the University of York, on the Intelligent Games and Games Intelligence (IGGI) programme.
- Adrián Barahona-Ríos is a PhD student (Computer Science) at the University of York, on the Intelligent Games and Games Intelligence (IGGI) programme. Having submitted his thesis, and while awaiting his viva, Adrián has started a job at Sony Interactive Entertainment.
- Tom Collins put the team together, and helped with implementation and use of various AI algorithms. Tom has a new open access book coming out called “Coding music and audio for the web: Empowerment through programming”. Register your interest here!
- Chenyu Gao is a PhD student (Music) at the University of York.
Interview
with Kemi Sulola, by Isabel Jackson
Artificial Intelligence (AI) is one of the most rapidly developing technologies of today, and its latest frontier is the world of music. Artists are navigating how to use this new technology in their creative process, and exploring to what extent AI can be used to create and transform music. Musicians are defining the future of music and technology’s place in it, as artists and audiences alike raise questions about how far AI can, and should, be used for music.
London-based singer-songwriter and creative Kemi Sulola has recently been exploring the use of AI technology in her music-making process. Passionate about RnB, Soul, Electronic and Dance music in particular, Kemi has been using various AI generative models on Cocreate, playing with different genres and sounds. Cocreate is a platform for creators and developers to pioneer this new technology, and I spoke to Kemi about her thoughts and experiences using the website in her music.
Kemi has been working on soon-to-be released music using various models, “my favourite is the timbre transfer, it allows you to make changes to the sound of something, so I could sing something but make it sound like a violin, and put it into a song, for example,” she explains – you can “put some inputs in, and it will create some music for you.”
[AI] can spark your creativity
When discussing the impact that the AI models have on her creative process, Kemi highlighted both positive and negative aspects; “it’s so random and can come up with anything, so sometimes that can spark your own creativity because I never would have thought of that”. But at the same time “if I have an idea first, I want to find things that fit around that [idea] and the AI might come up with something completely random that doesn’t fit” she said.
Kemi also expressed how much the genre and style of the music can influence her experience using AI models, with some genres lending themselves better than others to the technology. “[Soul and RnB], that type of music, is very emotion filled and very people filled, and with AI you’re not getting that human interaction and that kind of emotional level of music gets taken out.”
“But I think for certain genres, for example if I’ve got a house track, maybe I’d want to use AI because it would give me some cool random stuff,” and platforms such as Cocreate then come into the creative process.
“I’ve been trying to find [models] that work in line with the ideas that I have,” and for Kemi, as well as other artists, it’s clear that the human influence is still central to the process, and as such, things like AI should be embraced.
People shouldn’t be scared of it
Rapid technological advancements always tend to attract controversy, especially in the media, and many musicians are concerned about using this technology, as well as about its potential. “AI is modelled off of humans [and] it does still need human interaction,” Kemi said. “But I don’t think it could be a replacement, and people shouldn’t be scared of it.”
But Kemi did acknowledge the need for tighter regulation around AI, especially in music: “there does need to be certain rules, just to protect the music and people’s rights to what they create.” Essentially, protecting the creators and musicians at the heart of using this technology.
“At the end of the day, being an artist or musician, there is a skill to it, there’s a talent to it” but “using AI and making something good with it, there is also a skill to that as well,” Kemi explains.
But this can be a double-edged sword, and there are fears that AI could threaten the value of music in the future; “I am for the quality of music, I would obviously prefer someone to study the piano, and learn the chords, than just press a button.”
“It might be that more people are pressing the button, and less people are studying the art so that is a danger.” Despite the skill involved in creating music using AI generative models, artists like Kemi Sulola clearly have concerns about the future of using AI.
It is a great tool and it can expand us and our creative ideas
But Kemi puts any concerns aside, and thinks artists should embrace these new tools for creating music, and that platforms like Cocreate can be really useful for supplementing more traditional methods. “It is a great tool and it can expand us and our creative ideas, and I think that’s the great thing about it.”
“For the moment, I’ve enjoyed it, but I will always want to work with other musicians,” but “it’s going to evolve and change [and] if you’re not willing to be open, then you are going to miss out.”
Do what you do and find the people that love it
In Kemi’s opinion, “music is a feeling ultimately,” and “the AI is so intelligent that it could mock that to a degree,” but “music is not just in the listening […], it’s an experience, so if you go to a concert you’re not going to want to hear a jazz song on a computer.”
Whether you embrace using AI or not, Kemi feels that “there will always be those people that will love what you’re doing, as long as you’re true to that and expressing that.” Music is, and always has been, so individual, and Cocreate opens new doors for artists to be creative. For Kemi, regardless, it’s important to “do what you do and find the people that love it.”