With ChatGPT making headlines everywhere, it feels like the world has entered a Black Mirror episode. While some argue artificial intelligence will be the ultimate solution to our biggest cybersecurity issues, others say it will introduce a whole slew of new challenges.
I'm on the side of the latter. While I recognize that ChatGPT is an amazing piece of technology, it is also an enabler for hackers, commoditizing nation-state capabilities for the benefit of "script kiddies" — aka unsophisticated hackers. Beyond writing text, the technology opens up a scary scenario in which a computer can be guided to look for information within images that humans can't immediately pick up but machines are sensitive enough to see. Examples include reflections of passwords on glass, or people faintly visible in photos who would go unnoticed without the help of AI.
As ChatGPT adoption grows, I believe the industry needs to proceed with caution, and here's why. There are three types of capabilities hackers can use ChatGPT for: mass phishing, reverse engineering, and smart malware. Let's take a look at each one of these in detail.
Because ChatGPT is so powerful, it can cut the time needed to create handcrafted, personalized emails for a list of targets from a few days to just minutes. With the click of a button, ChatGPT can answer very specific questions and use its knowledge to impersonate both security experts and non-security personnel. And because ChatGPT can translate text into any style of writing and proofread at a very high level, once a list of employees and their details is obtained, it becomes easy to mass-produce emails in which a hacker convincingly pretends to be someone else, increasing the chances of a successful attack.
ChatGPT is amazing at understanding source code, and even machine code. Given the binary or obfuscated code of a system, ChatGPT can explain how the code works and what it does in a way that makes it easy for hackers to manipulate the software and gain access to a target company's servers.
Reverse engineering used to be a rare and highly lucrative skill; historically, only nation-states could incorporate it into their operations. Now even the most basic hackers can do it.
ChatGPT can function as a mini-brain for malware, making it far more autonomous. Today, sophisticated malware lets a hacker tunnel through a company's servers to observe the compromised network and send commands on what information to extract. That operation usually requires the hacker to stay connected to malware he or she has managed to install somewhere within the hacked company's network or servers.