Etherscan launches AI-powered Code Reader

The tool allows users to retrieve and interpret the source code of a specific contract address via AI prompt.

On June 19, Ethereum block explorer and analytics platform Etherscan launched a new tool, dubbed "Code Reader," that utilizes artificial intelligence to retrieve and interpret the source code of a specific contract address. After a user inputs a prompt, Code Reader generates a response via OpenAI's large language model (LLM), providing insight into the contract's source code files. Etherscan developers wrote: 

"To use the tool, you need a valid OpenAI API Key and sufficient OpenAI usage limits. This tool does not store your API keys."

Use cases for Code Reader include gaining deeper insight into contracts' code via AI-generated explanations, obtaining comprehensive lists of smart contract functions related to Ethereum data, and understanding how the underlying contract interacts with decentralized applications (dApps). "Once the contract files are retrieved, you can choose a specific source code file to read through. Additionally, you may modify the source code directly inside the UI before sharing it with the AI," developers wrote.

A demonstration of the Code Reader tool. Source: Etherscan
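The retrieve-and-interpret flow described above can be sketched with Etherscan's public `getsourcecode` API endpoint. This is an illustrative outline, not Etherscan's actual implementation; the helper names and prompt wording are assumptions.

```python
# Sketch of Code Reader's flow: fetch a contract's verified source from
# Etherscan, then package it into a prompt for an LLM. Helper names and
# prompt text are hypothetical.
import json
import urllib.parse
import urllib.request

ETHERSCAN_API = "https://api.etherscan.io/api"

def source_code_url(address: str, api_key: str) -> str:
    """Build the Etherscan API URL that returns a contract's verified source."""
    params = {
        "module": "contract",
        "action": "getsourcecode",
        "address": address,
        "apikey": api_key,
    }
    return f"{ETHERSCAN_API}?{urllib.parse.urlencode(params)}"

def fetch_source(address: str, api_key: str) -> str:
    """Download and extract the source code field from the API response."""
    with urllib.request.urlopen(source_code_url(address, api_key)) as resp:
        payload = json.load(resp)
    return payload["result"][0]["SourceCode"]

def build_prompt(source: str, question: str) -> str:
    """Combine the contract source with the user's question for an LLM call."""
    return (
        "Here is a smart contract's source code:\n\n"
        f"{source}\n\nQuestion: {question}"
    )
```

The resulting prompt would then be sent to OpenAI's API using the user's own API key, consistent with the tool's requirement that users supply a valid key with sufficient usage limits.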

Amid an AI boom, some experts have cautioned about the feasibility of current AI models. According to a recent report published by Singaporean venture capital firm Foresight Ventures, "computing power resources will be the next big battlefield for the coming decade." Despite growing demand for training large AI models on decentralized distributed computing power networks, researchers say current prototypes face significant constraints, including complex data synchronization, network optimization, and data privacy and security concerns. 

In one example, Foresight researchers noted that training a large model with 175 billion parameters in single-precision floating-point representation would require around 700 gigabytes of memory. However, distributed training requires these parameters to be frequently transmitted and updated between computing nodes. In the case of 100 computing nodes, with each node needing to update all parameters at each unit step, the model would require transmitting 70 terabytes of data per second, far exceeding the capacity of most networks. Researchers summarized:

"In most scenarios, small AI models are still a more feasible choice, and should not be overlooked too early in the tide of FOMO on large models."
