Taylor Swift’s New Hit: Synthetic Voice
Taylor Swift has a new anti-hero: synthetic voice. Fans are using AI tools to synthesize the global superstar’s voice, and the lines between reality and (fan) fiction are getting blurrier. Following tutorials posted to TikTok, Swifties are creating realistic sound bites in Swift’s voice and circulating them on social media without her permission. This is but one of many examples (deepfakes among them) of how people are having their identities reshaped through AI, and it underscores the need for responsible AI.
Swifties Define a New Role for Taylor Swift
As recently reported, Swifties are creating a fictional world in which Taylor Swift has a conversation with Kim Kardashian, insults people who cannot afford tickets to her concerts, and gives her fans pep talks. These conversations are all made up. Through a beta tool available on TikTok, people can create make-believe scenarios of Taylor Swift talking by uploading audio samples of her voice and manipulating them, a technique known as voice cloning.
Unfortunately, AI-generated voice clones can also be used to pervert someone’s voice, for example by putting abusive language in their mouth. For its part, TikTok has issued guidelines on how synthetic voice may be used. Among other things, the guidelines say:
We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock. However, AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks. Synthetic or manipulated media that shows realistic scenes must be clearly disclosed. This can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’.
For a celebrity such as Taylor Swift, synthetic voice can keep her brand visible, especially as fans use AI to express their loyalty to her music. When synthetic voice is used in collaboration with the artist, both the artist and the content creator can benefit in other ways, too. (We recently blogged about such an example from the film industry, when actor Val Kilmer’s voice was re-created synthetically for the movie Top Gun: Maverick.) But matters turn ugly when an artist’s voice is exploited for commercial gain without permission, as has happened to Jay-Z. Unauthorized use of an artist’s voice can lead to lost income and damage to their reputation.
Companies are using synthetic voice in many other ways well beyond the world of celebrity. For example, we also blogged about KFC re-creating the iconic voice of Colonel Sanders in a brilliant branding stunt. So, it’s important that businesses pay attention to how synthetic voice is being used in all industries, even if they do not work with celebrities.
AI Must Be Mindful
The unauthorized use of voice files for AI can create legal and reputational issues for any brand. We recommend that businesses be very careful about how they vet voice cloning services. Brands should insist that these services:
Define clear processes for how they source and use content, including safeguards against unauthorized use of voice files and moderation guidelines for how voices may be used. As we have seen with generative AI tools such as ChatGPT, AI can all too easily scrape content irresponsibly.
Define a protocol for detecting unauthorized use of a voice file. What fraud detection methodologies can social media platforms and commercial voice cloning services use to catch content they are not permitted to use?
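To make the idea of such safeguards concrete, one simple pattern is to gate a cloning pipeline behind a registry of clips whose rights holders have granted consent. The sketch below is a minimal, hypothetical illustration: the byte-level hash and the in-memory registry are stand-ins for the robust acoustic fingerprinting and consent databases a real service would need, and none of the function names refer to an actual product API.

```python
# Hypothetical sketch: check that a voice clip has documented consent
# before it enters a cloning pipeline. Illustrative only.
import hashlib

# Stand-in consent registry: fingerprints of clips whose owners have
# approved voice cloning (a real service would use a database).
CONSENTED_FINGERPRINTS = set()

def fingerprint(audio_bytes: bytes) -> str:
    """Toy fingerprint: SHA-256 of the raw audio bytes.
    Real systems would use acoustic fingerprinting, since any
    re-encoding of the audio changes a byte-level hash."""
    return hashlib.sha256(audio_bytes).hexdigest()

def register_consent(audio_bytes: bytes) -> str:
    """Record that the rights holder approved this clip for cloning."""
    fp = fingerprint(audio_bytes)
    CONSENTED_FINGERPRINTS.add(fp)
    return fp

def is_use_authorized(audio_bytes: bytes) -> bool:
    """Gate the cloning pipeline: only consented clips may pass."""
    return fingerprint(audio_bytes) in CONSENTED_FINGERPRINTS

# Example: one clip with consent on file, one without.
approved_clip = b"\x00\x01sample-audio-A"
unknown_clip = b"\x00\x02sample-audio-B"
register_consent(approved_clip)

print(is_use_authorized(approved_clip))  # True
print(is_use_authorized(unknown_clip))   # False
```

The design point is the gate itself: every clip is checked against recorded consent before use, so unauthorized uploads are refused by default rather than caught after the fact.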
Businesses also need people in the loop to moderate the voice synthesis process, from training an AI application on sourced voice files to managing their use and licensing.
At Centific, we use an approach known as Mindful AI to ensure that AI is used in a responsible, inclusive, and human-centered way. We incorporate Mindful AI in all the work we do, whether we are designing a voice-based application or a security program. Contact us to learn how we can use synthetic voice to benefit your business.