While there has been much discussion of how OpenAI’s ChatGPT could be abused for malicious purposes and the threat it can pose, the AI chatbot can also be very useful for the cybersecurity industry.
Launched in November 2022, ChatGPT has been described by many as revolutionary. It is built on top of OpenAI’s GPT-3 family of large language models, and users interact with it through prompts.
There have been numerous articles describing how the capabilities of ChatGPT can be used for malicious purposes, including writing credible phishing emails and creating malware.
However, ChatGPT can also bring many benefits to defenders, and the cybersecurity industry has increasingly integrated it into products and services. In addition, some industry members have been testing its capabilities and limitations.
In recent months, several cybersecurity companies have revealed that they have started or plan to use ChatGPT, and some researchers have found practical use cases for the chatbot.
Cloud security company Orca was one of the first to announce the integration of ChatGPT, specifically GPT-3, into its platform. The goal is to improve the remediation steps provided to customers for cloud security risks.
“By tuning these powerful language models with our own security data sets, we have been able to improve the detail and accuracy of our remediation steps, giving you a much better remediation plan and helping you to optimally resolve the issue as fast as possible,” Orca explained.
Kubernetes security company Armo has integrated ChatGPT’s generative AI into its platform to make it easy for users to create security policies based on Open Policy Agent (OPA).
“Armo Custom Controls pre-trains ChatGPT with security and compliance Regos and additional context, using and leveraging the power of AI to produce custom controls requested through natural language. The user gets the complete OPA rule produced by ChatGPT, as well as a natural language description of the rule and a suggested solution to fix the failed control, quickly, simply, and without the need to learn a new language,” the company said.
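Armo has not published implementation details, but a minimal sketch of the general pattern, written in Python with the OpenAI API, could look like the following. The system prompt, model choice and example request are illustrative assumptions, not Armo’s actual code, and any generated Rego rule would still need human review before being enforced.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical system prompt: prime the model with OPA/Rego context,
# roughly analogous to the "pre-training" Armo describes.
SYSTEM_PROMPT = (
    "You are a Kubernetes security assistant. Given a request in plain "
    "English, respond with a valid Open Policy Agent (OPA) Rego rule, "
    "followed by a one-paragraph explanation and a suggested remediation."
)

def natural_language_to_rego(request: str) -> str:
    """Ask the model to turn a plain-English control into a Rego rule."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
        temperature=0,  # favor deterministic, reproducible output
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(natural_language_to_rego(
        "Deny any pod that runs its containers as the root user."
    ))
```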
Logpoint recently announced a ChatGPT integration for its Logpoint SOAR (security orchestration, automation and response) solution in a lab environment.
“The new ChatGPT integration for Logpoint SOAR allows customers to investigate the potential of using SOAR playbooks with ChatGPT in cybersecurity,” the company said.
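Logpoint has not detailed the playbook logic, but a SOAR-style enrichment step built on the same idea can be sketched in Python as follows. The alert structure and prompt are invented for illustration and are not Logpoint’s.

```python
import json
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def enrich_alert(alert: dict) -> str:
    """Hypothetical playbook action: ask the model to summarize an
    alert and suggest next investigation steps for the analyst."""
    prompt = (
        "Summarize this SIEM alert in two sentences and list three "
        "investigation steps an analyst should take next:\n"
        + json.dumps(alert, indent=2)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

# Illustrative alert, not real Logpoint output.
alert = {
    "rule": "Possible brute force",
    "source_ip": "203.0.113.45",   # documentation-range IP
    "target_user": "svc-backup",
    "failed_logins": 312,
    "window_minutes": 10,
}
print(enrich_alert(alert))
```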
AlertEnterprise, the cyber and physical security convergence software company, has launched a chatbot powered by ChatGPT. It enables users to quickly obtain information on physical access, identity access management, visitor management, door reader analytics, and safety and security reports. Users can ask the chatbot questions like “how many new employee badges did we issue last month?” or “show me the upcoming employee training deadlines for access to restricted areas.”
Accenture Security has been analyzing the capabilities of ChatGPT to automate some tasks related to cyber defense.
Cybersecurity companies like Coro and Trellix are also currently exploring incorporating ChatGPT into their offerings.
Some members of the cybersecurity community have shared the results of tests they have conducted using ChatGPT. Training provider HackerSploit, for example, showed how it can be used to identify software vulnerabilities and how it can be leveraged for penetration testing.
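As a rough illustration of the vulnerability-identification use case, the model can simply be handed a snippet of code and asked what is wrong with it. The snippet and prompt below are our own examples, not HackerSploit’s; the expected answer here is a command injection flaw.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Deliberately vulnerable example: user input is passed to a shell.
VULNERABLE_SNIPPET = '''
import os

def ping(host):
    # host comes straight from an HTTP request parameter
    os.system("ping -c 1 " + host)
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Identify any security vulnerabilities in this code "
                   "and suggest a fix:\n" + VULNERABLE_SNIPPET,
    }],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```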
A Kaspersky researcher conducted several indicator of compromise (IoC) detection experiments and found promising results in some areas. The testing included checking systems for IoCs, comparing signature-based rule sets with ChatGPT’s output to identify gaps, detecting code obfuscation, and finding similarities between malware binaries.
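The researcher’s exact prompts were not published. A simplified version of the IoC-triage idea, with made-up artifacts, might look like this; given the mixed results reported, the model’s output should be treated as a hint for an analyst, not a verdict.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Made-up artifacts for illustration; not real indicators.
ARTIFACTS = [
    "powershell -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA...",
    "C:\\Users\\Public\\svch0st.exe",
    "hxxp://update-check[.]example[.]com/payload.bin",
]

prompt = (
    "For each artifact below, say whether it looks like a plausible "
    "indicator of compromise and explain why in one sentence:\n"
    + "\n".join(f"- {a}" for a in ARTIFACTS)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```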
The Any.Run online malware sandbox used ChatGPT to analyze malware samples. The chatbot was able to parse simple samples, but failed when asked to look at more complex code.
NCC Group conducted a security code review using AI and found that it “doesn’t really work.” The company found that while it can correctly identify some vulnerabilities, it also provides false information and false positives in many cases, making it unreliable for code analysis.
Security researchers Antonio Formato and Zubair Rahim have described how they integrated ChatGPT with the Microsoft Sentinel threat intelligence and security analytics solution for incident management.
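Without reproducing the researchers’ integration, the core pattern, asking the model for a triage note on a newly created incident, can be sketched as follows. The incident fields are invented, loosely modeled on Sentinel’s schema, and in the real integration this logic would run inside an automated playbook rather than a standalone script.

```python
import json
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def triage_note(incident: dict) -> str:
    """Ask the model for a short triage summary of a Sentinel-style
    incident, including a severity assessment and containment steps."""
    prompt = (
        "You are assisting a SOC analyst. Summarize this security "
        "incident, assess its likely severity, and recommend immediate "
        "containment steps:\n" + json.dumps(incident, indent=2)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

# Invented incident fields, loosely modeled on Sentinel's schema.
incident = {
    "title": "Suspicious sign-in from anonymous IP",
    "severity": "Medium",
    "entities": ["user: jdoe@example.com", "ip: 198.51.100.7"],
    "alertCount": 3,
}
print(triage_note(incident))
```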
Juan Andrés Guerrero-Saade, a security researcher and adjunct professor at Johns Hopkins SAIS, recently integrated ChatGPT into a class on malware analysis and reverse engineering. ChatGPT helped students quickly get answers to “dumb questions” without disrupting the class. It also made it easier for them to understand the tools they were using, interpret code, and even write scripts.
Related: Malicious ad engineering with ChatGPT
Related: Artificial intelligence bias: can AI be trusted?
Related: Ethical AI, possibility or impossible dream?