It seems that malicious intent follows strong technology, especially when that technology is available to the general public. There is evidence on the dark web that individuals have used ChatGPT to develop dangerous material, despite guardrails that should have blocked illegal requests. Experts feared this would happen. Against this backdrop, a Forcepoint researcher decided to write no code himself and rely solely on advanced techniques, such as steganography, that were previously the preserve of nation-state adversaries.
The main purpose of the exercise was to demonstrate two points:
How easy it is to bypass the inadequate guardrails that ChatGPT has in place.
How easy it is to create advanced malware without writing any code, relying solely on ChatGPT.
ChatGPT initially told him that creating malware was unethical and refused to provide the code.
To get around this, he requested small snippets of code and assembled the executable manually. The first successful task was code that searched the local disk for PNG files larger than 5 MB. The design choice was that a 5 MB PNG could easily hold part of a business-critical PDF or DOCX.
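The article does not reproduce the code ChatGPT produced for this step; a minimal sketch of such a size-filtered search in Python might look like the following (the function name and structure are my own assumptions):

```python
import os

def find_large_pngs(root, min_bytes=5 * 1024 * 1024):
    """Walk a directory tree and return paths of PNG files larger than min_bytes."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(".png"):
                continue
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > min_bytes:
                    matches.append(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return matches
```

On its own this is an ordinary disk-scanning utility; the point of the exercise was that each such innocuous fragment passed ChatGPT's filters individually.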
Then he asked ChatGPT to add code that embedded data into the discovered PNGs using steganography and exfiltrated those files from the machine. He also asked ChatGPT for code that searched the user's Documents, Desktop, and AppData directories and uploaded the files found there to Google Drive.
He then asked ChatGPT to merge these pieces of code and modify the result to split files into many "chunks" for quiet steganographic exfiltration.
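The merged code is not shown in the article, but splitting a payload into fixed-size chunks, so that each chunk fits inside one carrier image, is a generic operation. A sketch (the name and chunk size are illustrative):

```python
def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    """Split a byte string into consecutive chunks of at most chunk_size bytes."""
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
```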
He then submitted the MVP to VirusTotal, where five of sixty-nine vendors flagged the file as malicious.
The next step was to ask ChatGPT to implement its own LSB steganography routine in the program, without using an external library, and to delay the effective start of execution by two minutes.
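The generated routine is not included in the article. The core LSB idea, hiding one payload bit in the least-significant bit of each carrier byte (for example, each pixel channel of a decoded image), can be sketched in Python as follows; the 32-bit length header and the function names are my own assumptions, not the author's code:

```python
def lsb_embed(carrier: bytes, payload: bytes) -> bytearray:
    """Hide payload in the least-significant bits of the carrier bytes.
    A 4-byte big-endian length header is embedded first so the payload
    can be recovered without out-of-band information."""
    message = len(payload).to_bytes(4, "big") + payload
    if len(message) * 8 > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, byte in enumerate(message):
        for bit in range(8):          # MSB of each message byte first
            pos = i * 8 + bit
            bit_val = (byte >> (7 - bit)) & 1
            out[pos] = (out[pos] & 0xFE) | bit_val
    return out

def lsb_extract(carrier: bytes) -> bytes:
    """Recover a payload embedded by lsb_embed."""
    def read_bytes(start, count):
        result = bytearray()
        for i in range(start, start + count):
            byte = 0
            for bit in range(8):
                byte = (byte << 1) | (carrier[i * 8 + bit] & 1)
            result.append(byte)
        return bytes(result)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(4, length)
```

Because only the lowest bit of each byte changes, the carrier image looks visually identical after embedding. The two-minute delay, for its part, is as simple as a sleep call before the payload logic runs.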
Another change he asked ChatGPT to make was code obfuscation, which was refused. He tried again, changing the request from "obfuscate the code" to "rename all variables to random English first and last names," and ChatGPT was happy to cooperate. As an additional test, he disguised the obfuscation request as protecting the code's intellectual property. Again ChatGPT obliged, providing example code that hid variable names and recommending Go modules for building fully obfuscated binaries.
In the next step, he uploaded the new build to VirusTotal for verification.
And finally, the zero-day was ready. He was able to build a highly sophisticated attack in a few hours, following only the suggestions provided by ChatGPT, and it required no coding on his part. Forcepoint believes it would take a team of five to ten malware developers several weeks to do the same amount of work without the help of an AI chatbot, especially if they wanted to evade all detection vendors.
https://telegra.ph/Kak-sozdat-neobnaruzhivaemoe-vredonosnoe-PO-cherez-ChatGPT-04-05
