Google is ramping up its efforts to secure generative AI systems with an expansion of its Vulnerability Rewards Program (VRP).
In response to growing concerns about the potential risks posed by AI, the company has released updated guidelines for the program, outlining which discoveries are eligible for rewards and which fall outside its scope.
One of the key focus areas is identifying vulnerabilities related to generative AI attacks and potential malicious uses. For instance, discovering flaws in AI models that could lead to the extraction of private, sensitive information is in scope and eligible for a reward. However, flaws that affect only publicly available, non-sensitive data may not qualify.
This initiative comes as part of Google's commitment to addressing the unique security challenges posed by artificial intelligence. Unlike traditional technologies, AI systems may face security threats related to model manipulation and unfair bias, requiring special guidelines and incentives for security researchers.
In the past year, Google awarded a total of $12 million to security researchers for their contributions in identifying and mitigating vulnerabilities in various technologies.