
18 Sep, 2023
1 min read

Microsoft AI researchers unintentionally exposed vast amounts of sensitive data, including private keys and passwords, while publishing open-source training data to a GitHub repository that linked to a misconfigured storage bucket.

In a recent discovery, cloud security startup Wiz found that Microsoft's AI research division had exposed extensive sensitive data on GitHub. The exposed information, which included personal backups, Microsoft Teams messages, passwords, and more, had been accessible since 2020 due to misconfigurations in the repository.

The incident occurred when Microsoft's AI researchers provided open-source code and AI models for image recognition in a GitHub repository. Users were prompted to download the models from an Azure Storage URL, but the URL had overly broad permissions, granting access to the entire storage account. This resulted in the accidental exposure of 38 terabytes of data, including personal backups from Microsoft employees.

The exposed URL also allowed for "full control" rather than "read-only" permissions, potentially enabling malicious activities. Wiz noted that this issue highlights the importance of implementing robust security measures when handling vast amounts of data, especially in AI development.
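The distinction between read-only and full-control access is visible in the SAS URL itself: the `sp` query parameter lists the granted permissions (per Azure's public SAS documentation, `r` = read, `l` = list, `w` = write, `d` = delete, and so on). As a minimal sketch, a script like the following could flag links that grant more than read access; the helper name and example URLs are hypothetical, not taken from the incident.

```python
# Hypothetical audit helper: inspect the permissions ("sp" parameter)
# embedded in an Azure Storage SAS URL. Parameter letters follow the
# documented SAS format; the URLs below are invented examples.
from urllib.parse import urlparse, parse_qs

# Permission letters beyond read ("r") and list ("l") that a
# download-only link should not carry.
WRITE_LIKE = {"a": "add", "c": "create", "w": "write", "d": "delete"}

def audit_sas_url(url: str) -> list[str]:
    """Return the write-capable permissions granted by a SAS URL."""
    query = parse_qs(urlparse(url).query)
    perms = query.get("sp", [""])[0]  # e.g. "racwdl" means full control
    return [WRITE_LIKE[p] for p in perms if p in WRITE_LIKE]

# A read-only link comes back clean; a full-control link is flagged.
read_only = "https://example.blob.core.windows.net/models?sp=rl&sig=..."
full_ctrl = "https://example.blob.core.windows.net/models?sp=racwdl&sig=..."
print(audit_sas_url(read_only))   # []
print(audit_sas_url(full_ctrl))   # ['add', 'create', 'write', 'delete']
```

A check like this only reads the URL's own claims, of course; the authoritative fix is to issue tokens scoped to a single container with read-only permissions and a short expiry in the first place.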

Upon discovering the misconfiguration, Wiz promptly informed Microsoft on June 22, and Microsoft revoked the shared access signature (SAS) token on June 24. Microsoft's Security Response Center confirmed that no customer data was exposed and no other internal services were compromised.