Ritual, a decentralized artificial intelligence (AI) network, has emerged from stealth with a $25 million Series A financing round led by Archetype. Ritual is building AI-powered infrastructure designed to execute complex logic that is currently infeasible for smart contracts. The company's mission is to address the challenges holding back AI adoption across business verticals, including high compute costs, limited hardware access, and centralized APIs.
Empowering AI adoption in the crypto space
Ritual’s vision is ambitious: to become the focal point of AI in the web3 space by evolving Infernet, its first product, into a modular suite of execution layers that interoperate with other base-layer infrastructure across the ecosystem. This interoperability would let every protocol and application on any blockchain use Ritual as an AI coprocessor. Introducing AI models into the crypto landscape, from base-layer infrastructure to applications, opens up new use cases. For instance, Ritual could automatically manage risk parameters for lending protocols based on real-time market conditions.
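To make the lending-protocol use case concrete, here is a minimal sketch of the coprocessor round trip: an off-chain model maps market conditions to a risk parameter, and the on-chain protocol accepts the pushed-back result. All names (`MarketSnapshot`, `model_max_ltv`, `LendingProtocol`) are illustrative assumptions, not Ritual's actual API.

```python
from dataclasses import dataclass

@dataclass
class MarketSnapshot:
    volatility: float    # e.g. 30-day annualized volatility of the collateral
    utilization: float   # fraction of the lending pool currently borrowed

def model_max_ltv(snapshot: MarketSnapshot) -> float:
    """Stand-in for an off-chain AI model: maps market conditions to a
    maximum loan-to-value ratio. A real deployment would run an actual
    model on a coprocessor node and attest to the result."""
    base_ltv = 0.80
    # Tighten the limit as volatility and utilization rise.
    penalty = 0.5 * snapshot.volatility + 0.1 * snapshot.utilization
    return max(0.40, round(base_ltv - penalty, 2))

class LendingProtocol:
    """On-chain side (simplified): stores the risk parameter and accepts
    updates pushed back by the coprocessor."""
    def __init__(self) -> None:
        self.max_ltv = 0.80

    def on_coprocessor_result(self, new_ltv: float) -> None:
        # In practice this callback would verify the node's signature or proof.
        self.max_ltv = new_ltv

protocol = LendingProtocol()
calm = MarketSnapshot(volatility=0.10, utilization=0.50)
stressed = MarketSnapshot(volatility=0.60, utilization=0.90)

protocol.on_coprocessor_result(model_max_ltv(calm))
print(protocol.max_ltv)   # 0.70 under calm conditions
protocol.on_coprocessor_result(model_max_ltv(stressed))
print(protocol.max_ltv)   # tightened to 0.41 under stress
```

The point of the pattern is that the model itself stays off-chain, where compute is cheap, while the contract only stores and verifies the small result it receives.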
The modular execution layers of Ritual
Ritual’s protocol diagram reveals a layered approach to integrating AI into the blockchain ecosystem. The generalized message passing (GMP) layer, which spans layer-1 chains, rollups, and sovereign chains, enables interoperability between existing blockchains and the Ritual Superchain. The Superchain operates as an AI coprocessor serving all blockchains within the network.
The $25 million Series A funding round drew prominent investors, including Balaji Srinivasan, Accomplice, Robot Ventures, Accel, Dialectic, Anagram, Avra, and Hypersphere. The capital will primarily be used to expand Ritual’s developer network and seed the network.
While the unveiling of Ritual’s AI infrastructure is a significant step forward, the recent executive order on AI safety issued by the Biden administration has raised concerns in the AI community that it could hinder innovation.
The order introduces six new standards for AI safety and security, including a mandate that companies developing foundation models posing risks to national security, economic security, or public health and safety share their safety test results with federal officials. It also emphasizes accelerating the development of privacy-preserving techniques in AI.