Earlier this week, malicious users created fake, AI-generated explicit photos and videos of pop music sensation Taylor Swift using Microsoft’s Azure Face API. Microsoft has now announced that it has fixed the security flaw that allowed malicious users to create such content.
Microsoft uncovered a vulnerability that allowed potential attackers to tamper with specific API parameters, letting them superimpose Taylor Swift’s face onto another person’s body. In response, Microsoft promptly issued an update designed to reject invalid parameters within the API.
However, this corrective measure, while essential, falls short of stemming the mounting deepfake content across the internet. The rapid evolution of artificial intelligence has made creating deepfakes alarmingly straightforward, with countless tools freely available online.
Such manipulated photos and videos are frequently weaponized to spread disinformation, support fraudulent narratives, and orchestrate character assassination. In this instance, Taylor Swift became a victim of that troubling misuse of technology.
Fortunately, tech giants are actively taking measures to confront these challenges head-on. Microsoft is addressing the vulnerability, while X, under the guidance of Elon Musk, has restricted searches related to Taylor Swift on the platform. Many AI tools now apply a watermark to manipulated images to show they have been altered with AI; Samsung’s new photo editing tools in its latest Galaxy S24 smartphones are one example.