• Vulnerable U

Microsoft Alleges Group Abused Azure OpenAI to Produce Illicit Images

The operation used stolen API keys to generate images with DALL-E

Microsoft has filed a legal action against 10 unnamed people for using custom-built tools and stolen API keys to circumvent security and content protections in the company’s Azure OpenAI service and run a hacking-as-a-service operation. The operation generated DALL-E images that Microsoft’s content and technological controls should have blocked.

Why It Matters: The operation described in the Microsoft complaint is unusual and uses a variety of interesting tactics and techniques to get around the protections Microsoft has in place on its Azure OpenAI deployment. The operators used a custom client-side tool called “de3u” that allows users to send custom API calls to the Azure OpenAI service to generate the DALL-E images. “Defendants’ de3u application communicates with Azure computers using undocumented Microsoft network APIs to send requests designed to mimic legitimate Azure OpenAPI Service API requests. These requests are authenticated using stolen API keys and other authenticating Information,” the complaint says. This is a complex and clever way to circumvent Microsoft’s security measures and shows the lengths to which threat actors will go to achieve their ends.
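For context, a legitimate Azure OpenAI image-generation request, the kind of call the de3u tool reportedly mimicked, is an HTTPS POST authenticated with nothing more than an `api-key` header, which is why a stolen key alone is enough to impersonate a paying customer. A minimal sketch of how such a request is assembled (the resource name, deployment name, and API version below are illustrative placeholders, not values from the complaint):

```python
# Sketch of a standard Azure OpenAI image-generation request.
# The key in the "api-key" header is the sole credential -- the service
# has no way to tell a stolen key from one used by its rightful owner.
# Resource/deployment names and api-version here are placeholders.

def build_image_request(resource: str, deployment: str, api_key: str,
                        prompt: str, api_version: str = "2024-02-01") -> dict:
    """Assemble the URL, headers, and JSON body for an image-generation call."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/images/generations?api-version={api_version}")
    return {
        "url": url,
        "headers": {
            "api-key": api_key,  # bearer of this key gets full access
            "Content-Type": "application/json",
        },
        "json": {"prompt": prompt, "n": 1, "size": "1024x1024"},
    }

req = build_image_request("example-resource", "dalle3", "EXAMPLE-KEY",
                          "a watercolor of a lighthouse")
print(req["url"])
```

Because the credential is a static header value rather than a user-bound token, any tool that can replay the request shape, as de3u allegedly did, inherits the victim customer's access.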

Microsoft’s methods for attempting to disrupt this operation are also interesting. The company filed the complaint under the Computer Fraud and Abuse Act (CFAA) and the Racketeer Influenced and Corrupt Organizations (RICO) statute. The CFAA is the main statute used to prosecute computer crimes, but RICO is usually reserved for organized crime cases. In the complaint, Microsoft alleges that three of the unnamed defendants, the operators of what it calls the Azure Abuse Enterprise, have run it “as a continuing unit for the common purpose of achieving the objectives of the Enterprise, including the common objectives of wire fraud and access device fraud.”

Key Details

  • The operation has been ongoing since at least July 2024, which is when Microsoft researchers discovered suspicious activity involving some stolen API keys. “In late July 2024, Microsoft discovered use of customer API Keys to generate prohibited content. Investigation revealed that the API Keys had been stolen. The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown, but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers,” the complaint says. 

  • The complaint alleges that the operators used the stolen API keys in combination with their de3u tool and an “oai reverse proxy” to offer access to the Azure OpenAI API as a service to third parties. “Using stolen Microsoft API Keys that belonged to U.S.-based Microsoft customers. Defendants created a hacking-as-a-service scheme—accessible via infrastructure like the “rentry.org/de3u” and “aitism.net” domains—specifically designed to abuse Microsoft’s Azure infrastructure and software,” the complaint says.

  • The operators were careful to design their tooling to detect and log responses from the Azure OpenAI service so they could learn which prompts would trigger a content filter or other outcomes unfavorable to them. “If the de3u user’s prompt resulted in generation of an image by the Azure OpenAI service, then the oai reverse proxy tool receives image parameters from the Azure OpenAI service including the URL of the generated image, and the prompt used to generate the image. If no image was generated, the oai reverse proxy tool receives and logs the results of any content filtering,” the complaint says.
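The complaint's description implies the proxy branched on the shape of the service's response: an image URL means the prompt succeeded, while an error payload means it was filtered, and either way the result gets logged. A hypothetical sketch of that classification step (the JSON field names are assumptions based on typical OpenAI-style responses, not details from the defendants' tooling):

```python
# Hypothetical sketch of how a logging proxy might classify Azure OpenAI
# image responses, per the behavior described in the complaint.
# Field names ("data", "error", "url", "code") are assumed, illustrative
# OpenAI-style response keys -- this is not the defendants' code.
import json

def classify_response(body: str) -> dict:
    """Return the record a logging proxy would keep for one response."""
    resp = json.loads(body)
    if "data" in resp:
        # An image was generated; record the returned URL(s).
        return {"outcome": "image",
                "urls": [item.get("url") for item in resp["data"]]}
    if "error" in resp:
        # No image; record why, e.g. a content-filter rejection.
        err = resp["error"]
        return {"outcome": "filtered",
                "code": err.get("code"),
                "message": err.get("message")}
    return {"outcome": "unknown"}

ok = classify_response('{"data": [{"url": "https://example.com/img.png"}]}')
blocked = classify_response(
    '{"error": {"code": "content_filter", "message": "prompt rejected"}}')
```

Logging both branches is what makes the scheme self-improving: over time the operators accumulate a map of which prompts slip past the filters and which get blocked.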

What’s Next: Microsoft is asking the court for permanent injunctions against the unidentified defendants and to isolate and lock down the domains and infrastructure they used in the operation.