The rise of artificial music

Listening to Hardfork recently, I heard the team discussing the rise of AI-generated music on platforms like Spotify and its implications.

The argument was that this is not only another case of jobs lost to automation, but also a sign of declining music quality, artistry and integrity.

Popular ambient playlists, like those used for relaxing, studying and deep work, are a prime example of the trend: AI-generated tracks composed to match the style of the music that would traditionally qualify for inclusion.

While I absolutely sympathise with artists in this arena, as a Product Manager I would find it very hard to argue that human-made music is inherently better, particularly in the ambient playlist example.

While it may, at least in the near term, be a loss for a genre like country music to have AI-generated lyrics describing experiences no machine has ever felt, the same cannot be so easily argued for ambient sound. Of course AI tracks contain mistakes and imperfections, but the same is true of traditional music production; in fact, it's often argued that this is part of the charm.

The computer science community has long used methods such as the Turing Test to determine whether humans can tell that they are interacting with a machine, and I believe a similar blind test may be the only meaningful way to evaluate whether it matters that tracks are composed by AI rather than by humans.
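As a rough illustration of what such a blind test could look like in practice, here is a minimal sketch in Python: listeners guess whether each track was human- or AI-composed, and a binomial test checks whether their accuracy is reliably above chance. The trial numbers are hypothetical and the snippet simply assumes SciPy is available.

```python
from scipy.stats import binomtest

# Hypothetical results from a blind listening test: for each trial,
# a listener hears one ambient track and guesses whether it was
# composed by a human or generated by AI.
trials = 200    # total guesses collected (assumed figure)
correct = 118   # guesses that were correct (assumed figure)

# If listeners were guessing at random, we'd expect roughly 50% accuracy.
# A one-sided binomial test asks whether accuracy is reliably above chance.
result = binomtest(correct, trials, p=0.5, alternative="greater")

print(f"accuracy: {correct / trials:.2%}")
print(f"p-value vs. chance: {result.pvalue:.4f}")
# A small p-value would suggest listeners can consistently tell AI-composed
# tracks from human-composed ones; a large one would suggest they cannot.
```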

If humans can consistently tell the difference and prefer human-created music as a result, then we can collectively deem this method of musical production 'better' than its AI alternatives. However, if the criterion is more philosophical, then it becomes less about quality, artistry and integrity, and more about the parameters we place on AI in order to safeguard areas of life that we want to remain distinctly human.