OpenAI’s artificial intelligence chatbot has identified security vulnerabilities in an Ethereum smart contract that was exploited in 2018.

In a series of tweets on Tuesday, Conor Grogan, Coinbase’s director of product strategy and business operations, shared the results of an experiment with GPT-4, the latest model behind ChatGPT. The artificial intelligence system appears to have identified critical flaws in a live smart contract and even pointed out how it could be exploited.

The contract in question was, in fact, exploited five years ago through the very vulnerabilities the AI bot highlighted.

In January 2018, an ERC-20 token called Proof of Weak Hands Coin (PoWH) was created and promoted as a “self-sustaining Ponzi scheme.” Within three days of going live, it had amassed over $1 million in value, perhaps owing to the allure of the infinite dividends on offer.

The success was short-lived, however: days after launch, hackers exploited a bug in the contract’s codebase. A flawed transfer function allowed the malicious actors to make off with 866 ETH, worth around $800,000 at the time.

GPT-4 instantly identified a number of vulnerabilities in the contract, including a function that could trigger an integer overflow if the input amount exceeded the total supply. Analyses of the original incident by several blockchain users likewise attributed the exploit to an integer overflow.
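To illustrate the class of bug being described, here is a minimal Python sketch that simulates the unchecked 256-bit arithmetic older Solidity contracts (pre-0.8) performed. It is a hypothetical example, not the actual PoWH code: the balances, function names, and values are invented, and the flaw shown is an unchecked subtraction that wraps around, the general pattern behind overflow and underflow exploits.

```python
# Minimal sketch of an unchecked-arithmetic token transfer.
# Simulates 256-bit unsigned wraparound, as in pre-0.8 Solidity.
# Hypothetical example; not the actual PoWH contract code.

def wrap(x: int) -> int:
    """Emulate uint256 modular arithmetic: values wrap mod 2**256."""
    return x % (2**256)

balances = {"alice": 100, "attacker": 0}

def flawed_transfer(sender: str, recipient: str, amount: int) -> None:
    # BUG: no check that balances[sender] >= amount.
    # If amount exceeds the balance, the subtraction underflows
    # and wraps to an enormous value near 2**256.
    balances[sender] = wrap(balances.get(sender, 0) - amount)
    balances[recipient] = wrap(balances.get(recipient, 0) + amount)

# The attacker "sends" more tokens than they own:
flawed_transfer("attacker", "alice", 1)
print(balances["attacker"])  # 2**256 - 1: a near-infinite balance

def safe_transfer(sender: str, recipient: str, amount: int) -> None:
    # Fix: verify the balance before subtracting, which is what
    # SafeMath-style libraries enforced in older Solidity code.
    assert balances.get(sender, 0) >= amount, "insufficient balance"
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount
```

Solidity 0.8 and later revert on overflow by default, which is why exploits of this kind are mostly found in contracts written before those checks became standard.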

The chatbot also outlined a specific method for exploiting the contract, consistent with how it was apparently attacked in 2018.

“I believe that AI will ultimately help make smart contracts safer and easier to build, two of the biggest impediments to mass adoption,” said Grogan.

While the AI bot’s ability to instantly analyze flaws in smart contracts is undoubtedly impressive, it is worth noting that large language models such as GPT-4 are typically trained on publicly available data.

As a16z Investment Engineer Sam Ragsdale pointed out, the bot’s explanation was likely based on an existing Medium post written by a developer at the time of the exploit.

“Probably a low bar asking ChatGPT to debug code which has been discussed in depth, publicly, on the internet,” tweeted another user.