A CMOS-compatible neuromorphic computing chip could be on the horizon thanks to breakthrough research out of Technische Universität Dresden.
Researchers at the German university recently published work showcasing a new material design for neuromorphic computing, a technology that could have revolutionary implications for both blockchain and AI.
Using a technique called “reservoir computing,” the team developed a method for pattern recognition that uses a vortex of magnons to perform algorithmic functions nearly instantaneously.
Not only did they develop and test the new reservoir material, but they also demonstrated the potential for neuromorphic computing to work on a standard CMOS chip, something that could upend both blockchain and AI.
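The Dresden team's reservoir is a physical magnetic system, but the principle behind reservoir computing can be illustrated in software: a fixed, randomly connected recurrent network transforms an input stream into a rich internal state, and only a simple linear readout is trained. The following sketch (an echo state network on a toy sine-vs-square classification task, with illustrative sizes and parameters not taken from the paper) shows the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: neither W_in nor W is ever trained.
N = 100                                        # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, (N, 1))          # input weights
W = rng.uniform(-0.5, 0.5, (N, N))             # recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state trajectory."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)  # nonlinear recurrent update
        states.append(x.copy())
    return np.array(states)

# Toy pattern-recognition task: noisy sine snippets vs. noisy square-wave snippets.
def make_snippet(kind):
    t = np.linspace(0, 4 * np.pi, 50)
    sig = np.sin(t) if kind == 0 else np.sign(np.sin(t))
    return sig + rng.normal(0, 0.1, t.size)

X, y = [], []
for _ in range(100):
    kind = int(rng.integers(0, 2))
    X.append(run_reservoir(make_snippet(kind))[-1])  # final state as feature
    y.append(kind)
X, y = np.array(X), np.array(y)

# Only this linear readout is trained (ridge regression).
ridge = 1e-3
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

preds = (X @ W_out > 0.5).astype(int)
accuracy = (preds == y).mean()
```

Training only the readout is what makes the approach attractive for hardware: the reservoir itself can be any sufficiently rich dynamical system, including a magnon vortex.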
Classical computers, such as the ones that power our smartphones, laptops, and the majority of the world's supercomputers, use binary transistors that can either be on or off (expressed as either a “one” or “zero”).
Neuromorphic computers use programmable physical artificial neurons to imitate organic brain activity. Instead of processing binary values, these systems send signals across varying patterns of neurons with the added factor of time.
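The "added factor of time" can be made concrete with the standard leaky integrate-and-fire (LIF) neuron model: rather than outputting a static one or zero, the neuron accumulates input over time and emits a spike when its potential crosses a threshold, so the timing and rate of spikes carry the information. The parameters below are illustrative, not drawn from any specific chip.

```python
import numpy as np

def lif_spikes(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron driven by an input
    current trace; return the time steps at which it spiked."""
    v = 0.0
    spike_times = []
    for t, i_t in enumerate(current):
        v += dt * (-v / tau + i_t)   # leaky integration toward the input
        if v >= v_thresh:            # threshold crossing -> spike
            spike_times.append(t)
            v = v_reset              # reset after spiking
    return spike_times

# A stronger constant input drives earlier, more frequent spikes:
# the spike train, not a single bit, encodes the signal's strength.
weak = lif_spikes(np.full(200, 0.06))
strong = lif_spikes(np.full(200, 0.15))
```

Networks of such neurons communicate through the relative timing of these spikes, which is what lets neuromorphic hardware encode patterns rather than discrete true/false values.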
The reason this is important for the fields of blockchain and AI, specifically, is because neuromorphic computers are fundamentally suited for pattern recognition and machine learning algorithms.
Binary systems use Boolean algebra to compute. For this reason, classical computers remain unchallenged when it comes to crunching numbers. However, when it comes to pattern recognition, especially when the data is noisy or missing information, these systems struggle.
This is why it takes a significant amount of time for classical systems to solve complex cryptography puzzles and why they’re entirely unsuited for situations where incomplete data prevents a math-based solution.
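The contrast between strict Boolean matching and noise-tolerant pattern matching can be shown with a toy example (purely illustrative, unrelated to any actual cryptographic scheme): an exact bit-for-bit lookup fails the moment a single bit is corrupted, while a nearest-pattern lookup degrades gracefully.

```python
import numpy as np

# Four stored 16-bit "templates" (deterministic, for reproducibility).
templates = np.array([
    [0] * 16,          # template 0: all zeros
    [1] * 16,          # template 1: all ones
    [0, 1] * 8,        # template 2: alternating, starting with 0
    [1, 0] * 8,        # template 3: alternating, starting with 1
])

def exact_match(query):
    """Boolean-style lookup: succeeds only on a bit-perfect match."""
    for i, t in enumerate(templates):
        if np.array_equal(query, t):
            return i
    return None                      # any flipped bit -> total failure

def nearest_pattern(query):
    """Pattern-style lookup: pick the template with the smallest
    Hamming distance, tolerating noisy or missing bits."""
    dists = (templates != query).sum(axis=1)
    return int(dists.argmin())

noisy = templates[2].copy()
noisy[5] ^= 1                        # flip one bit to simulate noise

exact_match(noisy)      # None: strict equality cannot recover the pattern
nearest_pattern(noisy)  # 2: nearest-neighbor still identifies template 2
```

The gap widens as inputs get noisier, which is the regime where neuromorphic, pattern-based systems are claimed to shine.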
In the finance, artificial intelligence, and transportation sectors, for example, there’s a never-ending influx of real-time data. Classical computers struggle with occluded problems — the challenge of driverless cars, for example, has so far proven difficult to reduce to a series of “true/false” compute problems.
However, neuromorphic computers are purpose-built for dealing with problems that involve a lack of information. In the transportation industry, it’s impractical for a classical computer to predict the flow of traffic because there are too many independent variables. A neuromorphic computer can constantly react to real-time data because it doesn’t process data points one at a time.
Instead, neuromorphic computers run data through pattern configurations that function somewhat like the human brain. Our brains flash specific patterns in relation to specific neural functions, and both the patterns and the functions can change over time.
The main benefit of neuromorphic computing is that, relative to classical and quantum computing, it consumes very little power. This means that neuromorphic computers could significantly reduce the cost in time and energy of both operating a blockchain and mining new blocks on existing blockchains.
Neuromorphic computers could also provide a significant speedup for machine learning systems, especially those that interface with real-world sensors (self-driving cars, robots) or those that process data in real time (crypto market analysis, transportation hubs).