Sometimes, yeah, for sure. Definitely not all of it, the more I’ve listened. And I’d say I’ve had to learn to listen to it, because it sounded wrong and weird at first. Absolutely. But over time, I’ve come to actively like certain aspects and sound qualities inherent to it.
One good example I saw a lot in Suno v4.5+ was what I’ve come to call the “AI warble,” noticeably more pronounced on male than on female generated vocals. Something about that model does women’s voices better than men’s, to my ears.
But yeah, there is definitely a crunchy sonic weirdness that sometimes veers into off-putting, depending on the track. I always select against generations like that and hard delete anything I myself wouldn’t want to listen to. But I’ve also noticed that the sound quality of these songs varies tremendously between devices, formats, streaming platforms, and speakers. Flaws or embellishments you hear in one context might be rendered very differently in another.
I also think that what those of us coming from a prior “real” music background hear today as “bad” will, before long, sound nostalgic as we look back on the AI tools and how they’ve progressed in the intervening time.
When generating AI artifacts for production and artistic purposes, there’s always a forward-facing consideration too. A tune with “bad” sonic qualities today might be remastered into a new render using better systems in 2, 5, or 10 years. So is what sounds good or bad right now really the best and only consideration? It’s a strong “it depends” on all of this, I guess…