Nanomagnetic Computing Could Drastically Cut AI’s Energy Use

As the Internet of Things expands, engineers want to embed AI into everything, but the amount of energy AI requires is a challenge for the smallest and most remote devices. A new "nanomagnetic" computing approach could provide a solution.

While most AI development today is focused on large, complex models running in huge data centers, there is also growing demand for ways to run simpler AI applications on smaller and more power-constrained devices.

For many applications, from wearables to smart industrial sensors to drones, sending data to cloud-based AI systems doesn’t make sense.

This has led to a growing body of research into new hardware and computing approaches that make it possible to run AI on these kinds of systems.

Much of this work has sought to borrow from the brain, which is capable of incredible feats of computing while using the same amount of power as a light bulb.

New research led by scientists from Imperial College London suggests that computing with networks of nanoscale magnets could be a promising alternative.

In a paper published last week in Nature Nanotechnology, the team showed that by applying magnetic fields to an array of tiny magnetic elements, they could train the system to process complex data and make predictions using a fraction of the power of a conventional computer.

The researchers used this magnetic array to implement a form of AI known as reservoir computing.

The team showed that their device could match leading reservoir computing schemes on a series of prediction challenges involving data that varies over time, though these were not yet practical data-processing tasks.
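Reservoir computing works by driving a fixed, nonlinear dynamical system with an input signal and training only a simple linear readout on the system's responses; the dynamical system itself (here, the magnetic array) never needs to be trained, which is what makes physical implementations so cheap to run. The sketch below is a minimal software analogue, an echo state network predicting one step ahead on a toy sine wave. It is not the nanomagnetic device or the tasks from the paper, and all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-varying signal: predict the next value of a sine wave.
t = np.linspace(0, 60, 3000)
u = np.sin(t)            # input signal
y = np.roll(u, -1)       # target: signal one step ahead

# Fixed random reservoir (stands in for the physical dynamical system).
n_res = 200
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep dynamics stable (spectral radius < 1)

# Drive the reservoir with the input and record its internal states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i, u_i in enumerate(u):
    x = np.tanh(W_in[:, 0] * u_i + W @ x)
    states[i] = x

# Train only the linear readout (ridge regression) -- the cheap, trainable part.
train, test = slice(100, 2000), slice(2000, 2999)   # drop the initial transient
ridge = 1e-6
A = states[train]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ y[train])

pred = states[test] @ W_out
print(f"test MSE: {np.mean((pred - y[test]) ** 2):.2e}")
```

In a physical reservoir computer, the random recurrent network in this sketch is replaced by the material itself, so only the final linear readout has to be computed and trained electronically.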

The researchers estimate that, when scaled up, the approach could be 100,000 times more energy efficient than conventional computing.

There’s a long way to go before this kind of device could be put to practical use, but the results suggest computers based on magnets could play an important role in embedding AI everywhere.