During the Microsoft Ignite 2023 event, the company launched its Azure AI Speech text-to-speech avatar creator. The tool is currently available in public preview, and its main function is to generate deepfake-style videos of people. Microsoft also showed off how users can benefit from this latest addition to its growing list of AI-powered tools.
This new deepfake creator lets users make videos of a person talking simply by writing a script and uploading an image of that person. The AI then animates the image so the person appears to say the words in the script. Two AI models work together to make this new feature a reality.
One handles the animation, while the other handles the text-to-speech side of things. According to Microsoft, the tool will be beneficial for businesses, which will be able to “build training videos, product introductions, customer testimonials [and so on] simply with text input.” Businesses looking to create realistic virtual assistants or chatbots can also turn to this feature, as sketched in the example below.
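To make the text-input workflow concrete, here is a minimal Python sketch of how a business might submit a script to a batch avatar-synthesis endpoint and get back a rendered talking-avatar video. The endpoint path, API version, voice name, avatar names, and payload fields here are illustrative assumptions and may not match Microsoft's actual API; treat it as a sketch of the general flow, not official sample code.

```python
# Illustrative sketch only: the endpoint, API version, and payload fields below
# are assumptions for demonstration and may not match Azure's actual schema.
import requests

SPEECH_REGION = "westus2"          # hypothetical Azure region
SPEECH_KEY = "<your-speech-key>"   # hypothetical Speech resource key


def request_avatar_video(script_text: str) -> str:
    """Submit a text script and ask the service to render a talking-avatar video."""
    url = (
        f"https://{SPEECH_REGION}.api.cognitive.microsoft.com"
        "/avatar/batchsyntheses/demo-job?api-version=2024-04-15-preview"  # assumed path/version
    )
    payload = {
        # The text-to-speech model voices the script...
        "inputKind": "PlainText",
        "inputs": [{"content": script_text}],
        "synthesisConfig": {"voice": "en-US-JennyNeural"},  # assumed prebuilt voice
        # ...while the animation model drives the on-screen avatar.
        "avatarConfig": {
            "talkingAvatarCharacter": "lisa",       # assumed avatar character name
            "talkingAvatarStyle": "casual-sitting",  # assumed style preset
        },
    }
    resp = requests.put(
        url,
        json=payload,
        headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY},
    )
    resp.raise_for_status()
    # Return the job id, which would be polled until the finished video is ready.
    return resp.json().get("id", "demo-job")


if __name__ == "__main__":
    job_id = request_avatar_video("Welcome to our product introduction video.")
    print("Submitted avatar synthesis job:", job_id)
```

In this kind of flow, the text input is all the business supplies: the service pairs the synthesized speech with the animated avatar and returns a video file once the job completes.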
While the Microsoft Azure AI Speech text-to-speech avatar looks impressive, it raises serious concerns
Over the past few months, there have been complaints about the abuse of AI around the world. Scammers and bad actors have turned to these tools to make their work easier. Without the necessary regulations, they have collectively managed to defraud millions of unsuspecting users.
Just a few days ago, an AI model was used to clone the voice of a popular actress for an ad she knew nothing about. A few months ago, an AI model was used to generate music under the names of popular singers. These are just some of the cases that show how the AI tools currently available can be abused.
Now, nobody is saying that the Microsoft Azure AI Speech text-to-speech avatar is here to make scamming easier. But could this tool end up in the hands of scammers and bad actors who will abuse it beyond its intended usage? The answer to that question is a resounding yes, which is why strong regulations need to govern the use of this new AI tool from Microsoft.
Microsoft is aware of the risks this new AI tool could pose to the public if it falls into the hands of bad actors. For this reason, businesses and other users need “explicit written permission” to use the feature. Microsoft also requires a disclosure at the end of each video clearly stating that it is AI-generated.
These two measures should help keep scammers and bad actors from abusing the new feature. Businesses and users can now try it out to see how they might put it to good use, and in the coming months, netizens will get to see it applied in businesses and other areas as well.