Nina Welding | November 24, 2019
Until recently, teaching an old dog new tricks was easier than teaching a computer how to learn in the same way a human does.
One approach to creating artificial intelligence (AI) has been to augment a computer’s neural network so it can access and apply already-learned information to new tasks. However, this approach requires significant time and energy to transfer data from memory to the processing unit.
Researchers at the University of Notre Dame have demonstrated a novel one-shot learning method that allows computers to draw upon already-learned patterns more quickly, more efficiently, and with less energy than is currently possible, while adapting to new tasks and previously unseen data. Their work, recently published in Nature Electronics, was conducted using the ferroelectric field-effect transistor (FeFET) technology from GlobalFoundries of Dresden, Germany.
The interdisciplinary team was led by Suman Datta, the Stinson Professor of Nanotechnology and director of the Applications and Systems-driven Center for Energy-Efficient integrated Nano Technologies and the Center for Extremely Energy Efficient Collective Electronics. For one- and few-shot learning applications, the team produced a ferroelectric ternary content-addressable memory (TCAM) array prototype in which each memory cell is based on two ferroelectric field-effect transistors. Compared with more conventional processing platforms, the Notre Dame prototype provides a 60-fold reduction in energy consumption and a 2,700-fold improvement in data processing time when accessing computational memory and applying prior information.
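To give a rough sense of what a ternary content-addressable memory does, the sketch below emulates in software the search the hardware performs in parallel: each stored entry is a ternary pattern of 0, 1, and "X" (don't-care), a query is compared against every entry at once, and the closest match is returned, approximating the nearest-neighbor lookup used in one- and few-shot learning. This is an illustrative simplification, not the authors' implementation; the function names and example data are hypothetical.

```python
def mismatches(entry, query):
    """Count positions where a stored ternary entry disagrees with a query bit string.
    'X' matches either bit, mimicking a TCAM don't-care cell."""
    return sum(1 for e, q in zip(entry, query) if e != "X" and e != q)

def tcam_search(table, query):
    """Return (row index, mismatch count) of the best-matching stored entry.
    The hardware evaluates all rows simultaneously; this loop emulates that search."""
    return min(enumerate(mismatches(e, query) for e in table), key=lambda t: t[1])

# Hypothetical support set: one stored pattern per previously seen class.
table = ["10X1", "0110", "1X00"]
idx, dist = tcam_search(table, "1001")  # row 0 matches exactly ('X' absorbs the differing bit)
```

In the hardware version, the match happens inside the memory array itself, which is what removes the costly data transfer between memory and processor described above.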