27 Oct, 2023
1 min read

Google is ramping up its efforts to secure generative AI systems with an expansion of its Vulnerability Rewards Program (VRP).

In response to growing concerns about the potential risks posed by AI, the company has released updated guidelines for the program, outlining which discoveries are eligible for rewards and which fall outside its scope.

One of the key focus areas is identifying vulnerabilities that enable attacks on generative AI systems or their malicious misuse. For instance, discovering a flaw in an AI model that allows the extraction of private, sensitive information is in scope and eligible for a reward. If the discovered issue affects only publicly available, non-sensitive data, however, it may not qualify.

This initiative comes as part of Google's commitment to addressing the unique security challenges posed by artificial intelligence. Unlike traditional technologies, AI systems may face security threats related to model manipulation and unfair bias, requiring special guidelines and incentives for security researchers.

In the past year, Google awarded a total of $12 million to security researchers for identifying and mitigating vulnerabilities across various technologies.