The success of OpenAI’s text-based artificial intelligence (AI) platform, ChatGPT v4, has rekindled interest in its underlying technology. The capability of the language model to find vulnerabilities in Ethereum smart contracts is one of its more intriguing characteristics.
Conor Grogan, a director at Coinbase, tried this and reported on it on Wednesday, sharing his experience with GPT-4 on the social media site Twitter. According to him, ChatGPT v4 immediately exposed several security flaws and described how the code might be attacked after he pasted a live Ethereum smart contract into it.
“I dumped a live Ethereum contract into GPT-4. In an instant, it highlighted a number of security vulnerabilities and pointed out surface areas where the contract could be exploited. It then verified a specific way I could exploit the contract.” — Conor (@jconorgrogan), March 14, 2023
Grogan, a director at the cryptocurrency exchange Coinbase, specifically pasted a live Ethereum contract into the most recent iteration of the well-known ChatGPT, which exposed a number of security flaws and weaknesses. He stated on March 14 that it even verified a specific way the smart contract could be exploited.
Additionally, Grogan shared screenshots of the AI bot’s analysis, which do seem to demonstrate that ChatGPT v4 can accurately identify crucial problems and vulnerabilities. Grogan concluded that the analyzed smart contract “should not be used, as it contains crucial vulnerabilities and is built on an illegal scheme.”
Smart Contract Audits via ChatGPT: Where It Missed a Few Holes
The AI tool demonstrated its capacity to offer “useful assistance to the Web3 security community” by successfully raising “many concerns that appeared valid on the surface.” The study did, however, also identify “quite a lot of potential for improvement.”
In particular, ChatGPT failed to find a number of critical security weaknesses, including flaws in project-specific logic, incorrect mathematical calculations and statistical models, and discrepancies between the implementation and the design intent.
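To illustrate the kind of project-specific logic flaw that surface-level review tends to miss, here is a minimal, hypothetical sketch in Python. It mimics the classic reentrancy pattern: the contract makes an external call before updating its own state, so a malicious callback can re-enter and withdraw repeatedly against a stale balance. The `Ledger` class and all names are invented for illustration; real contracts would be written in Solidity.

```python
class Ledger:
    """Toy ledger mimicking a vulnerable smart contract (hypothetical example)."""

    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def withdraw(self, account, amount, send):
        # BUG: the external call (`send`) happens *before* the balance
        # is updated -- the same check-then-act flaw behind reentrancy.
        if self.balances.get(account, 0) >= amount:
            send(amount)                      # attacker-controlled callback
            self.balances[account] -= amount  # state updated too late


ledger = Ledger()
ledger.deposit("victim", 100)
ledger.deposit("attacker", 10)

stolen = []

def malicious_send(amount):
    stolen.append(amount)
    # Re-enter while the attacker's balance has not yet been decremented.
    if len(stolen) < 5:
        ledger.withdraw("attacker", 10, malicious_send)

ledger.withdraw("attacker", 10, malicious_send)
print(sum(stolen))  # 50 -- five withdrawals against a 10-unit balance
```

Each line in isolation looks reasonable, which is why catching this class of bug requires reasoning about ordering and intent rather than pattern-matching individual statements.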
Due to its limits in “completely comprehending the complexity and nuances of code, in addition to its lack of hands-on experience in real-world circumstances,” AI still seems to have a long way to go before it can be trusted as the sole auditor of smart contract code.
For these reasons, the blockchain security platform stressed that ChatGPT’s analysis should be supplemented with human audits by knowledgeable security specialists to ensure correctness. The platform went on to highlight ChatGPT’s advantages and disadvantages relative to expert human auditors across a range of metrics.
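One plausible division of labor is for automated tooling to pre-screen code for well-known risky patterns and leave judgment calls to a human reviewer. The sketch below, a toy Python pre-screen with an invented `prescreen` helper and a deliberately tiny pattern list, is only an illustration of that workflow, not how any real audit platform operates.

```python
import re

# A few well-known risky Solidity patterns (illustrative, far from exhaustive).
RISKY_PATTERNS = {
    r"\btx\.origin\b": "tx.origin used for authorization (phishable)",
    r"\.call\{value:": "low-level call transfers value (reentrancy risk)",
    r"\bblock\.timestamp\b": "block.timestamp used (miner-influenced)",
}

def prescreen(source: str) -> list[str]:
    """Flag lines matching known risky patterns for human review."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {warning}")
    return findings

contract = """\
function withdraw(uint amount) external {
    require(tx.origin == owner);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
}"""

for finding in prescreen(contract):
    print(finding)
```

A screen like this catches known idioms cheaply, but, as the missed-vulnerability findings above suggest, it says nothing about whether the contract’s logic matches its design intent; that remains the human auditor’s job.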
Will ChatGPT v4 Replace Smart Contract Auditors?
ChatGPT v4, like any other AI language model, cannot audit smart contracts on its own, since doing so requires a comprehensive understanding of blockchain technology, programming languages, and security standards. Auditors must also be able to spot potential pitfalls, weaknesses, and bugs in the code, which calls for both technical and analytical skills.
However, auditing smart contracts involves more than just looking through the code; it also entails comprehending the underlying business logic and making sure the contract complies with the intended use and applicable laws. Auditors must be able to evaluate the impact of the smart contract on various stakeholders and have a comprehensive understanding of the entire ecosystem.
While ChatGPT v4 and other AI language models can help auditors by processing and analyzing massive volumes of data, they cannot take the place of human auditors. Only human auditors can provide the technical proficiency, domain-specific knowledge, and analytical abilities that smart contract auditing requires.
ChatGPT v4’s discovery of security vulnerabilities in Ethereum smart contracts highlights the need for continued diligence in ensuring the security and integrity of blockchain systems. While ChatGPT and other AI language models can help spot potential risks and weaknesses, it ultimately remains the job of developers and auditors to make sure that smart contracts are designed and deployed in a trustworthy and safe manner.
Regardless of whether it independently discovered the smart contract flaws or simply surfaced information that was already available online, ChatGPT’s capabilities remain significant and may be applied to smart contract audits, among various other uses.