Researchers have discovered a “spooky” similarity between how human brains and artificial intelligence systems see three-dimensional objects.
The discovery is a significant step towards better understanding how to replicate human vision with AI, said scientists at Johns Hopkins University who made the breakthrough.
Natural and artificial neurons registered nearly identical responses when processing 3D shape fragments, despite the artificial neurons having been trained only on two-dimensional photographs.
The AlexNet AI network unexpectedly responded to the images in the same way as neurons in an area of the human brain called V4, the first stage in the brain’s object-vision pathway.
“I was surprised to see strong, clear signals for 3D shape as early as V4,” said Ed Connor, a neuroscience professor at the Zanvyl Krieger Mind/Brain Institute at Johns Hopkins University.
“But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels.”
Professor Connor described a “spooky correspondence” between image response patterns in natural and artificial neurons, especially given that one is the product of evolution and a lifetime of learning, while the other was designed by computer scientists.
“Artificial networks are the most promising current models for understanding the brain,” Professor Connor said.
“Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence.”
A research paper detailing the discovery was published in the scientific journal Current Biology on Thursday.