Google plans to use its Gemini AI model in conjunction with its Mandiant cybersecurity unit and VirusTotal threat intelligence to reverse engineer malware attacks. The company also plans to use its Gemini 1.5 Pro large language model, released in February, to make threat reports easier to read.
The company claimed in a blog post that the Gemini 1.5 Pro model took just 34 seconds to analyse the code of WannaCry, the virus used in a 2017 ransomware attack that hit hospitals, companies and other organizations around the world.
Beyond reverse engineering malware, Gemini can also be used within Threat Intelligence to summarize threat reports in natural language, allowing companies to assess how potential attacks may impact them.
Google further shared that Threat Intelligence draws on a vast network of information to monitor potential threats before an attack happens. Additionally, the company plans to use Mandiant, the cybersecurity firm whose human experts monitor potentially malicious groups, to assess security vulnerabilities around AI projects.
While AI models can help reverse engineer malware and prepare threat reports, they can themselves fall prey to threat actors. These threats include “data poisoning”, in which bad code is added to the data scraped by AI models, degrading their ability to respond to specific prompts.
Google’s use of AI models in the cybersecurity space is not a new trend. Earlier, Microsoft launched Copilot for Security, which uses the GPT-4 model to answer questions about threats.
Published - May 07, 2024 02:26 pm IST