Spotify Takes Aim at AI Slop and Vocal Clones with New Platform Policies

Spotify is rolling out a series of significant policy updates designed to tackle the growing issues of AI-generated music and spam on its service. The streaming giant is focusing on two main fronts: introducing new standards for AI disclosure in music credits and strengthening its defenses against spammy content like unauthorized vocal clones.

In collaboration with the digital music standards organization DDEX, Spotify is helping to create an industry-wide framework for how AI use in music production should be documented. The goal is to move beyond a simple binary label. Instead of a track being flagged as either AI or not AI, the new system will encourage artists to provide more detailed information. They will be able to specify exactly which parts of their song involved AI assistance, such as AI-generated vocals, AI-created instrumentation, or AI-powered post-production tools.

Alongside this push for transparency, Spotify is also launching a new impersonation policy. It is directly aimed at the problem of AI voice clones, where bad actors use generative AI to mimic the voice of a famous artist without their consent. The policy promises to offer artists stronger protections against this form of impersonation and a clearer path for recourse if their voice is cloned.

In its announcement, Spotify acknowledged the double-edged nature of AI technology. The company stated that while AI can unlock incredible new creative tools for artists and discovery methods for listeners, it also presents serious risks. At its worst, AI can be used by content farms and bad actors to confuse listeners, flood the ecosystem with low-quality slop, and harm the careers of authentic artists working to build an audience.

Vocal clones are not the only tactic used to game the system. Spotify highlighted other forms of spam that have become easier to produce with AI tools.
These include mass uploads of tracks, duplicate songs, SEO hacks to manipulate search results, and artificially short tracks designed to generate royalties from minimal listening time. The company says these practices dilute the royalty pool, ultimately taking money and attention away from legitimate artists.

To combat this, Spotify is launching a new spam filter later this fall. The system will proactively identify uploads that engage in these spammy behaviors. Once detected, these tracks will be tagged on the platform and suppressed from recommendations, meaning they will not be pushed to users through algorithmic playlists like Discover Weekly. Spotify revealed that it has already been aggressive in this area, having removed more than 75 million spammy tracks over the last year.

The new policies are framed as part of Spotify's broader mission to ensure more transparency for listeners and to protect the identity and earnings of artists. However, these rules do not appear to target all AI-generated music. Projects like the fully AI-generated band The Velvet Sundown, which uses AI for its lyrics, vocals, and imagery, remain available on the platform. Spotify did not comment on specific acts but emphasized that its stance is to support artists' freedom to use AI creatively while actively combating its misuse by content farms and bad actors. The company's approach appears to focus on deceptive practices and spam rather than banning AI-assisted music creation outright.


