Cybersecurity researchers discovered that cybercriminals were using ChatGPT to develop malware and phishing emails that could be used for surveillance, ransomware attacks, spam, and other malicious campaigns. Some of these criminals had little to no programming experience.

A participant on a dark web cybercrime forum who provides services to cybercriminals claimed that the ChatGPT-generated Python code combines several cryptographic operations: a key for code signing, the Blowfish and Twofish algorithms for encrypting system files, and the BLAKE2 hash function for comparing files.
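
For context, the BLAKE2 step described here is an ordinary file-integrity operation that ships with Python's standard library. A minimal, benign sketch of comparing two files by their BLAKE2 digests (the file names below are hypothetical) might look like this:

    import hashlib
    from pathlib import Path

    def blake2_digest(path: Path, chunk_size: int = 65536) -> str:
        """Return the BLAKE2b hex digest of a file, read in chunks."""
        digest = hashlib.blake2b()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Hypothetical file names: compare the two files by digest rather than byte by byte.
        first, second = Path("report_v1.pdf"), Path("report_v2.pdf")
        print("identical" if blake2_digest(first) == blake2_digest(second) else "different")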

The resulting script can be used to encrypt a single file and append a message authentication code (MAC) to the end of it, as well as to encrypt a hard-coded path and decrypt a list of files it receives. Researchers say that this script could easily be turned into ransomware that encrypts files on a target machine without any manual intervention by the attacker.

In another instance, ChatGPT generated two pieces of code:

  • A Java program that covertly downloads the PuTTY SSH client and launches it using PowerShell
  • A Python-based information stealer that searches for PDF files, copies them to a temporary directory, compresses them, and sends them to the attacker's server

Additionally, a script was created using ChatGPT to establish a darknet market where stolen credentials, credit card numbers, malware, and other illicit products can be purchased. The code uses a third-party API to obtain the most recent prices for the cryptocurrencies Monero, Bitcoin, and Ethereum, allowing the operator to set prices for purchases.
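
The report does not name the third-party price API the script relied on, so the sketch below is only an illustration of that mechanism, using one freely available public endpoint (CoinGecko's simple-price API) to pull current USD quotes for the three coins:

    import json
    import urllib.request

    # Illustrative only: the report does not identify the price API the script used.
    # CoinGecko's public "simple price" endpoint is one freely available option.
    URL = (
        "https://api.coingecko.com/api/v3/simple/price"
        "?ids=bitcoin,ethereum,monero&vs_currencies=usd"
    )

    def fetch_prices() -> dict:
        """Fetch current USD prices for Bitcoin, Ethereum, and Monero."""
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return json.loads(resp.read().decode("utf-8"))

    if __name__ == "__main__":
        for coin, quote in fetch_prices().items():
            print(f"{coin}: ${quote['usd']:,}")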

ChatGPT can also be used in other phases of a cyber-attack. A malicious macro created by Check Point researchers using ChatGPT can be disguised in an Excel file attached to an email. They later developed a reverse shell, a port-scanning script, and sandbox-detection code, and used the more sophisticated Codex AI system to compile their Python code into a Windows executable.

The analysts packaged a malicious Excel document, whose embedded macro downloads a reverse shell onto the target computer, into a phishing email. The attacker simply has to execute the attack, since the AI has already done the labor-intensive work.

While ChatGPT's terms and conditions prohibit its use for illegal or malicious purposes, the researchers easily tweaked their queries to get around these restrictions. It is worth noting that ChatGPT can also be used by security professionals, for example to write code that scans files for malicious URLs or queries VirusTotal for the number of detections of a specific cryptographic hash.
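
As a defensive illustration of that last point, the sketch below queries VirusTotal's v3 REST API for the analysis statistics of a file hash. The environment variable holding the API key and the sample hash (the widely published EICAR test file) are assumptions made for the example:

    import json
    import os
    import urllib.request

    # Defensive sketch: count how many engines flag a file hash on VirusTotal.
    # Assumes the VirusTotal v3 REST API and an API key stored in the
    # (hypothetical) VT_API_KEY environment variable.
    API_KEY = os.environ["VT_API_KEY"]

    def detection_count(file_hash: str) -> int:
        """Return the number of engines reporting the hash as malicious or suspicious."""
        url = f"https://www.virustotal.com/api/v3/files/{file_hash}"
        req = urllib.request.Request(url, headers={"x-apikey": API_KEY})
        with urllib.request.urlopen(req, timeout=15) as resp:
            report = json.loads(resp.read().decode("utf-8"))
        stats = report["data"]["attributes"]["last_analysis_stats"]
        return stats.get("malicious", 0) + stats.get("suspicious", 0)

    if __name__ == "__main__":
        # Sample hash for illustration: the EICAR antivirus test file.
        sample = "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f"
        print(f"{sample}: flagged by {detection_count(sample)} engines")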