Penetration testing is a security assessment in which testers exploit vulnerabilities to uncover further weaknesses in a system and attempt to execute malicious code. These tests are especially important for protecting against data theft and preventing security exploits.
Penetration tests draw on several techniques for probing the security of a network, including network scanning, firewall testing, reviews of security surveillance systems, and artificial intelligence. AI can analyze the results of security tests using technologies developed to reveal network vulnerabilities.
AI can help you achieve more comprehensive and effective results through special algorithms designed for use in penetration tests and through automatically executed security tests.
Benefits of Using AI for Penetration Testing
Today, the rapid development of technology and users' ever-increasing security needs have made AI technologies a necessity in security testing. Using AI to improve security delivers much faster and more efficient results, eliminating much of the time-consuming manual effort that customized, complex security tests otherwise require. AI helps detect vulnerabilities as early as possible, and it can perform unique, complex security tests that make weaknesses easier to spot.
AI is proving quite successful, especially when it comes to detecting and blocking an attack. Training artificial intelligence requires very large datasets, and an application with high web traffic benefits here: every incoming request can be turned into training data. The result is an AI that can read and analyze web application traffic and detect threats. This is one of the simplest examples of the approach.
Beyond web traffic, AI can also detect much of the malware aimed at your app or device before it does damage. Many firewalls have already started using this method.
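The traffic-analysis idea above can be sketched as a toy classifier. Everything here — the training requests, the tokenizer, and the Naive Bayes scoring — is invented for illustration; a production system would train on far larger, properly labeled datasets and richer features.

```python
# Toy sketch: training a tiny Naive Bayes classifier on labeled HTTP
# request lines so it can flag suspicious traffic. The training data,
# tokenizer, and smoothing are all illustrative assumptions.
import math
import re
from collections import Counter

def tokenize(request: str) -> list[str]:
    # Split a request line into lowercase word-like tokens.
    return re.findall(r"[a-z0-9_]+", request.lower())

def train(samples: list[tuple[str, str]]) -> dict:
    # samples: (request_line, label) pairs, label "benign" or "malicious".
    counts = {"benign": Counter(), "malicious": Counter()}
    totals = Counter()
    for request, label in samples:
        counts[label].update(tokenize(request))
        totals[label] += 1
    return {"counts": counts, "totals": totals}

def classify(model: dict, request: str) -> str:
    vocab = set(model["counts"]["benign"]) | set(model["counts"]["malicious"])
    scores = {}
    for label in ("benign", "malicious"):
        # Log prior plus log likelihood with add-one smoothing.
        score = math.log(model["totals"][label] / sum(model["totals"].values()))
        word_total = sum(model["counts"][label].values())
        for token in tokenize(request):
            freq = model["counts"][label][token] + 1
            score += math.log(freq / (word_total + len(vocab) + 1))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical labeled traffic; real systems would use thousands of logs.
training_data = [
    ("GET /index.html HTTP/1.1", "benign"),
    ("GET /products?id=42 HTTP/1.1", "benign"),
    ("GET /search?q=union+select+password+from+users", "malicious"),
    ("GET /page?name=<script>alert(1)</script>", "malicious"),
]

model = train(training_data)
print(classify(model, "GET /search?q=union+select+*+from+accounts"))  # -> malicious
```

The point is not the algorithm itself but the workflow: every logged request becomes labeled training data, and the resulting model scores new traffic automatically.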
On top of all this, human error is one of the biggest problems in cybersecurity. A minor code vulnerability that goes unnoticed can lead to major, irreversible security problems. With the development of AI, plugins that scan code for vulnerabilities have emerged, warning developers about such issues. So far, they have shown some success in preventing human error.
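A minimal sketch of what such a code-scanning plugin does, assuming a purely pattern-based approach. The rule list and warning messages below are invented for illustration; real scanners use data-flow analysis and, increasingly, trained models.

```python
# Toy sketch of a vulnerability-scanning check: flag lines that call
# functions commonly associated with security bugs. The rules here are
# illustrative assumptions, not a real tool's rule set.
import re

# Hypothetical rules: regex pattern -> warning shown to the developer.
RULES = {
    r"\bstrcpy\s*\(": "strcpy() has no bounds check; consider strlcpy/snprintf",
    r"\bgets\s*\(": "gets() is unsafe and removed from C11; use fgets()",
    r"document\.write\s*\(": "document.write() with dynamic data risks DOM XSS",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    # Return (line_number, warning) pairs for every rule match.
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

code = 'char buf[8];\nstrcpy(buf, user_input);\n'
for lineno, warning in scan_source(code):
    print(f"line {lineno}: {warning}")
```

An editor plugin would run checks like this on save and surface the warnings inline, catching the small oversights that humans miss.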
The response time against a threat is also critical. When under attack, it takes time to detect the attack, plan a defense, and launch defensive systems. AI is very helpful in shortening each of these steps.
Limitations of AI in Cybersecurity
Using AI for cybersecurity requires identifying and analyzing malicious, clean, and potentially unsafe applications. But even if you train an algorithm on very large datasets, you can never be certain of its output. As a result, it is not safe to rely entirely on machines and AI; the technology needs to be backed by human intervention.
Some security tool makers claim that solutions powered by machine learning can analyze each instance. According to the manufacturers, these tools can detect malware using only mathematical means. However, this is hardly possible.
Alan Turing's work on computability offers a good illustration: the halting problem shows that even a perfect machine cannot decide whether an arbitrary unknown input will cause undesired behavior in the future. This result carries over to many fields, including cybersecurity.
Another serious limitation of machine learning applications in cybersecurity is hidden within the limits of artificial intelligence models. For example, machines have become smart enough to beat humans at chess.

But chess has certain rules. Chess engines do not deviate from these rules. When it comes to cybersecurity, attackers often have no rules. The ever-changing nature of the digital landscape makes it impossible to create a protective solution that can detect and block all future threats.
Source Code Analysis With ChatGPT
ChatGPT, developed by OpenAI, has made a serious entry into our lives in many areas. You can ask it questions and chat with it, and it also tries to help with programming and software problems. From a cybersecurity perspective, ChatGPT even attempts source code analysis. But it is still in its infancy, and it will take some time to mature.
To see this for yourself, let's test ChatGPT's capabilities. For example, below is a simple piece of JavaScript code that creates an XSS vulnerability. Let's ask ChatGPT about this code and have it tell us about any vulnerabilities.
document.write("<strong>Current URL</strong> : " + document.baseURI);
ChatGPT did mention an XSS vulnerability in response, which is a pretty good start. But real source code is never that simple, so let's make the example a little more complicated.
Below is code written in the C programming language. It belongs to a vulnerable application and was even used in a real-world application. If you want, you can examine the real-world source code vulnerabilities that Sonar released in 2022.
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <libgen.h>

void listCommands(void); /* defined elsewhere in the application */

char *loggerPath, *cmd;

void rotateLog() {
    char logOld[PATH_MAX], logNew[PATH_MAX], timestamp[0x100];
    time_t t;
    time(&t);
    strftime(timestamp, sizeof(timestamp), "%FT%T", gmtime(&t));
    snprintf(logOld, sizeof(logOld), "%s/../logs/global.log", loggerPath);
    snprintf(logNew, sizeof(logNew), "%s/../logs/global-%s.log", loggerPath, timestamp);
    execl("/bin/cp", "/bin/cp", "-a", "--", logOld, logNew, NULL);
}

int main(int argc, char **argv) {
    if (argc != 2) {
        printf("Usage: /opt/logger/bin/loggerctl \n");
        return 1;
    }
    /* The binary elevates to root before acting on caller-influenced paths. */
    if (setuid(0) == -1) return 1;
    if (seteuid(0) == -1) return 1;
    /* loggerPath is derived from argv[0], which the caller controls. */
    char *executablePath = argv[0];
    loggerPath = dirname(executablePath);
    cmd = argv[1];
    if (!strcmp(cmd, "rotate")) rotateLog();
    else listCommands();
    return 0;
}
The vulnerability here is that an attacker can make changes to certain files without administrative privileges. Let's see how ChatGPT responds to this security flaw.
The core of the problem here involves setuid, the user ID (uid), and the effective user ID (euid). Without going into too much technical detail, the main point to note is that ChatGPT could not detect this subtle part. It can tell that there is a problem but, unfortunately, cannot get to its root.
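To make the subtlety concrete, here is a short sketch, in Python for brevity, of why deriving loggerPath from argv[0] is dangerous: the caller chooses the invocation path, so the root-privileged process can be steered toward attacker-chosen directories. The example paths below are hypothetical.

```python
# Sketch of the path-traversal root cause in the C code above:
# loggerPath comes from dirname(argv[0]), which the *caller* controls.
import os.path

def resolved_log_path(argv0: str) -> str:
    # Mirrors snprintf(logOld, ..., "%s/../logs/global.log", loggerPath)
    logger_path = os.path.dirname(argv0)
    return os.path.normpath(os.path.join(logger_path, "..", "logs", "global.log"))

# Intended invocation: the log path stays inside /opt/logger.
print(resolved_log_path("/opt/logger/bin/loggerctl"))
# -> /opt/logger/logs/global.log

# Attacker invokes the setuid binary through a hard link placed in a
# directory they control, so the root-privileged process now touches
# attacker-chosen files instead.
print(resolved_log_path("/tmp/attacker/bin/loggerctl"))
# -> /tmp/attacker/logs/global.log
```

Because the process has already called setuid(0), whatever path this resolves to is accessed with root privileges; that is the "thin part" the model missed.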
Through these examples, you've seen how ChatGPT reacts to different programming languages and vulnerability types. If the code is really simple and has an obvious security hole, ChatGPT can help you. But you should not rely entirely on ChatGPT for source code analysis, penetration testing, or other security analysis.
The Future of Penetration Testers
Artificial intelligence will be an important part of penetration testers' work in the future. For example, testers will no longer have to spend time detecting malicious activity manually and will be able to run security scans automatically.
AI will also help detect and counter new, more complex attack techniques during penetration testing. But AI is still like a child playing in the park: it needs advice from an adult. For the foreseeable future, cybersecurity experts and penetration testers are in no danger of running out of work.
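As a small taste of what such automated scanning looks like, here is a toy TCP connect scanner. It is a sketch only, with an arbitrary timeout and port range, and it should be run solely against hosts you are authorized to test.

```python
# Toy sketch of an automated security scan: a sequential TCP connect
# scan. Real scanners add service fingerprinting, rate control, and,
# increasingly, AI-assisted triage of the results.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    # Return the ports that accepted a TCP connection.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few ports on the local machine only.
print(scan_ports("127.0.0.1", range(79, 82)))
```

Automation takes over the repetitive sweep; the tester's judgment is still needed to interpret what an open port actually means.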