Three university-led teams have been chosen for a program to map, understand and mathematically re-create visual processing in the brain in response to an Intelligence Advanced Research Projects Activity challenge that aims to close gaps between computer and human recognition.
Sandia National Laboratories said Nov. 2 that participants in IARPA’s Machine Intelligence from Cortical Networks project will work to explain how brains see patterns and classify objects, then use that information to develop computer algorithms for national security and intelligence applications.
Carnegie Mellon University, Harvard University’s Wyss Institute for Biologically Inspired Engineering, Baylor College of Medicine, the Allen Institute for Brain Science and Princeton University will use different techniques to map the visual cortex and generate models to develop computer algorithms for object recognition.
“We’re building better tools to see things that we were unable to see before, and we’re trying to come up with theories to explain what we’ve observed,” said Brad Aimone, computational neuroscientist and principal member of technical staff at Sandia National Laboratories.
“The hope is it will tell us something that will make our models better so we could use them to do interesting things.”
Aimone will also lead a team that will use computational neuroscience models to evaluate how much neuroscience the machine learning algorithms incorporate, and will serve on a peer-review panel to compare the university-led teams’ conclusions.