Applying the technique, the team created a network that learned to reconstruct half-erased photographs after seeing the full images only a few times. In contrast, a traditional neural network would need to see many more images before it could reconstruct the original. The researchers also created a network that learned to identify handwritten alphabet letters—which are nonuniform, unlike typed ones—after seeing one example.
In another task, neural networks controlled a character moving through a simple maze to find rewards. After one million trials, a network with the new semiadjustable weights found each reward three times as often per trial as a network with only fixed weights did. The static parts of the semiadjustable weights apparently learned the structure of the maze, whereas the dynamic parts learned to adapt to new reward locations. “This is really powerful,” says Nikhil Mishra, a computer scientist at the University of California, Berkeley, who was not involved in the research, “because the algorithms can adapt more quickly to new tasks and new situations, just like humans would.”
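The split between a slow, static part and a fast, dynamic part of each connection can be sketched in a few lines of Python. This is an illustrative reading of the idea, not the researchers' actual code: assume each connection carries a fixed weight `w`, a learned plasticity coefficient `alpha`, and a fast Hebbian trace `hebb` that updates during the task, so the effective weight is `w + alpha * hebb`.

```python
import numpy as np

# Sketch of "semiadjustable" (plastic) connections, under the assumption
# described above. Names (w, alpha, eta, hebb) are hypothetical.

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

w = rng.normal(size=(n_in, n_out))      # static part (slow, gradient-trained)
alpha = rng.normal(size=(n_in, n_out))  # per-connection plasticity (also slow)
eta = 0.1                               # learning rate of the fast Hebbian trace
hebb = np.zeros((n_in, n_out))          # dynamic part, updated while acting

def step(x, hebb):
    """One forward pass: effective weight = static part + plastic part."""
    y = np.tanh(x @ (w + alpha * hebb))
    # Hebbian rule: connections between co-active units strengthen
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb

x = rng.normal(size=n_in)
for _ in range(5):
    y, hebb = step(x, hebb)
```

In this picture, `w` and `alpha` play the role of the static parts that encode the maze's structure, while `hebb` is the dynamic part that can quickly rewire when a reward moves.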
Thomas Miconi, a computer scientist at the ride-sharing company Uber and the paper's lead author, says his team now plans to tackle more complicated tasks, such as robotic control and speech recognition. In related work, Miconi wants to simulate “neuromodulation,” an instant networkwide adjustment of adaptability that allows humans to sop up information when something novel or important happens.