Secured integration to the future

5 ways to improve cybersecurity with ChatGPT and LLM

27.06.2023
While artificial intelligence (AI) chatbots and large language models (LLMs) can be a double-edged sword for corporate risk, they can also advance cybersecurity initiatives in unexpectedly beneficial ways. ChatGPT and other large language models have become popular because of the gains in technology, efficiency, and productivity they promise across industries.
Although implementing ChatGPT or an LLM in your corporate ecosystem carries risks, these tools can also increase cybersecurity professionals' efficiency, productivity, and job satisfaction. The better a cybersecurity professional understands a new technology, the more effectively they can use it. In this article, we look at how you can use ChatGPT and LLMs to improve your cybersecurity.
5 ways to improve cybersecurity with ChatGPT and LLM:
1. Vulnerability scanning and filtering. Several experts and groups, from global CISOs to the Cloud Security Alliance, have argued that AI models can significantly improve cybersecurity vulnerability scanning and filtering. A recent Cloud Security Alliance (CSA) report demonstrated that OpenAI's Codex API can effectively scan for vulnerabilities in programming languages such as C, C#, Java, and JavaScript. "We can foresee LLMs, like those in the Codex family, becoming a standard component of future vulnerability scanners," the researchers said. For example, a scanner could detect and flag insecure code patterns in different languages, allowing developers to address weaknesses before they become critical security risks. In terms of filtering, AI models can surface threat indicators that might otherwise go unnoticed by security personnel.
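To illustrate the kind of insecure-pattern flagging such a scanner performs, here is a minimal sketch of a regex-based pre-filter that could route suspicious snippets to an LLM for deeper review. The pattern list is a simplified assumption for illustration, not the CSA's or any vendor's rule set:

```python
import re

# Simplified catalogue of insecure patterns across languages.
# These rules are illustrative assumptions, not a production rule set.
INSECURE_PATTERNS = {
    "C/C++ unbounded copy": re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\("),
    "Dynamic code evaluation": re.compile(r"\beval\s*\("),
    "SQL string concatenation": re.compile(r"(SELECT|INSERT|UPDATE|DELETE)\b.*['\"]\s*\+"),
}

def flag_insecure_lines(source: str):
    """Return (line_number, rule_name, line) for every matching line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in INSECURE_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, rule, line.strip()))
    return findings

sample = 'char dst[8];\nstrcpy(dst, user_input);\nquery = "SELECT * FROM users WHERE id=" + uid;'
for finding in flag_insecure_lines(sample):
    print(finding)
```

In a combined pipeline, only the lines this cheap pre-filter flags would be sent to the (slower, costlier) LLM for contextual analysis.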
2. Reverse engineering add-ons and PE file APIs. Artificial intelligence and large language models can be used to develop detection rules and reverse-engineer popular add-ons, building on reverse engineering frameworks such as IDA and Ghidra. "If you ask exactly what you need and compare it to MITRE ATT&CK tactics, you can take the result offline and make it better, use it as a defense," says Matt Fulmer. LLMs can also analyze the APIs imported by PE (Portable Executable) files and tell cybersecurity professionals what those APIs can be used for. In turn, this reduces the time that security researchers spend reviewing PE files and analyzing their APIs.
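As a hedged sketch of the kind of summary an LLM could produce from a PE file's import table, the toy mapping below turns imported Windows API names into capability hints. The API-to-capability table is a small illustrative assumption, not an LLM and not an exhaustive triage rule set:

```python
# Maps Windows API names (as extracted from a PE import table, e.g. by
# Ghidra or a PE parser) to the capabilities they commonly indicate.
# Illustrative subset only.
API_CAPABILITIES = {
    "VirtualAllocEx": "remote memory allocation (process injection)",
    "WriteProcessMemory": "writing into another process (process injection)",
    "CreateRemoteThread": "remote thread creation (process injection)",
    "InternetOpenUrlA": "HTTP download (possible C2 or payload fetch)",
    "RegSetValueExA": "registry modification (possible persistence)",
}

def summarize_imports(imported_apis):
    """Return human-readable capability hints for a list of imported APIs."""
    hints = []
    for api in imported_apis:
        if api in API_CAPABILITIES:
            hints.append(f"{api}: {API_CAPABILITIES[api]}")
    return hints

# Imports as a reverse engineer might pull them from a suspicious binary.
apis = ["CreateFileA", "VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]
for hint in summarize_imports(apis):
    print(hint)
```

An LLM does the same job with far broader coverage and in natural language, which is exactly the time savings the paragraph above describes.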
3. Threat hunting queries. According to the Cloud Security Alliance, cybersecurity professionals can increase efficiency and speed up response times by using ChatGPT and other large language models to develop threat hunting queries. By generating queries and rules for malware research and detection tools such as YARA, ChatGPT enables potential threats to be identified and mitigated quickly, so employees can spend more time on higher-priority cybersecurity tasks. This capability is especially helpful for maintaining robust cybersecurity in an ever-changing threat environment, since the rules can be tailored to an organization's specific needs and to industry-wide threats.
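As a sketch of the kind of draft YARA rule ChatGPT might produce, the helper below renders a simple rule from a list of indicator strings. The rule name and indicators are hypothetical examples, and any generated rule should still be reviewed and tuned by an analyst before deployment:

```python
def build_yara_rule(name, strings, min_matches=2):
    """Render a simple YARA rule from a list of indicator strings.

    Mirrors the shape of a draft rule an LLM might generate; it is a
    starting point for an analyst, not a finished detection.
    """
    lines = [f"rule {name}", "{", "    strings:"]
    for i, s in enumerate(strings):
        lines.append(f'        $s{i} = "{s}" ascii wide')
    lines.append("    condition:")
    lines.append(f"        {min_matches} of ($s*)")
    lines.append("}")
    return "\n".join(lines)

rule = build_yara_rule(
    "Suspected_Stealer",                          # hypothetical rule name
    ["HTTP/1.1 POST /gate.php", "stealer.log"],   # example indicators
)
print(rule)
```

In practice you would describe the malware family to ChatGPT in prose and ask for the rule directly; the value is that the analyst reviews a concrete draft instead of writing boilerplate from scratch.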
4. Detecting AI-generated text in attacks. Large language models are known for generating text, but they may soon also be able to detect and watermark AI-generated text, a capability that will likely find its way into future email security software. Being able to identify AI-generated text would make it easier for teams to spot phishing emails, polymorphic code, and other warning signs.
5. Generating and transferring security code. In some cases, large language models such as ChatGPT can be used both to generate security code and to transfer it between languages. Consider this example: a phishing campaign successfully targets several employees in a company, potentially compromising their credentials. While the cybersecurity team may know who opened the phishing email, it may be unclear whether any malicious code was executed to steal credentials. To investigate, you can use a Microsoft 365 Defender Advanced Hunting query to identify the 10 most recent logon events by email recipients after they opened malicious emails.
The query helps flag suspicious login activity associated with compromised credentials. Here, ChatGPT can supply a Microsoft 365 Defender hunting query that checks login attempts against the compromised email accounts, helping to lock attackers out of the system and showing whether affected users need to change their passwords. ChatGPT can thus reduce the time to action when responding to cyber incidents.
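A hedged sketch of what such an Advanced Hunting request might look like is below, with the KQL embedded as a Python string so it could be submitted through the hunting API. The table and column names (EmailEvents, IdentityLogonEvents, NetworkMessageId-style email telemetry, AccountUpn) follow the Microsoft 365 Defender schema but should be verified against the current schema documentation before use:

```python
# KQL draft of the scenario above: the 10 most recent logons by phishing
# recipients after the phishing email arrived. Table and column names
# (EmailEvents, IdentityLogonEvents, AccountUpn, ThreatTypes) are taken
# from the Microsoft 365 Defender schema and should be double-checked.
SUSPICIOUS_LOGON_QUERY = """
EmailEvents
| where ThreatTypes has "Phish"
| project PhishTime = Timestamp, RecipientEmailAddress
| join kind=inner (
    IdentityLogonEvents
    | project LogonTime = Timestamp, AccountUpn, LogonType
) on $left.RecipientEmailAddress == $right.AccountUpn
| where LogonTime > PhishTime
| top 10 by LogonTime desc
"""

def hunting_request(query: str) -> dict:
    """Build the JSON body for a POST to an Advanced Hunting endpoint."""
    return {"Query": query.strip()}

body = hunting_request(SUSPICIOUS_LOGON_QUERY)
print(body["Query"].splitlines()[0])
```

This is the kind of draft ChatGPT can produce from a plain-English description of the incident; an analyst then validates the logic and column names before running it.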
Alternatively, cybersecurity professionals may face the same problem, find a Microsoft 365 Defender hunting query for the same scenario, and then realize that their back-end system does not work with the KQL query language. Instead of searching for a suitable example in the right language, they can ask ChatGPT to translate the query from one query-language style to another.