AI is already thinking the way we do

Does AI perform some cognitive tasks the way we do because we wrote the code, or is there simply a right way to think certain thoughts?
For now, it is enough to consider the possibility that deep convolutional neural networks spontaneously segregate object and facial recognition, both highly specialized tasks, just as human brains do.
A team of researchers from the Massachusetts Institute of Technology and Columbia University has found that the networks organized themselves, without instruction, to recognize faces and objects separately.
Specifically, object-trained networks performed sub-optimally when tasked with identifying faces, and face-trained networks struggled with objects.
A VGG16 network trained on both tasks, using 1,715 biometric identities and 423,000 object images, was correct almost as often as a specially trained network, according to the research paper.
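To picture the setup, here is a minimal, hypothetical sketch of a dual-task network of the kind described: a single VGG16 trunk shared by two classification heads, one for face identities and one for object categories. It assumes PyTorch and torchvision; the class, head and task names, and the 1,000-object-class count, are illustrative placeholders rather than details from the paper (only the 1,715 identities figure comes from the article).

```python
# Illustrative sketch only, not the authors' code: a shared VGG16 trunk
# with two task-specific heads, trained-from-scratch style (weights=None).
import torch
import torch.nn as nn
from torchvision.models import vgg16


class DualTaskVGG16(nn.Module):
    """Shared VGG16 features with separate face and object heads."""

    def __init__(self, num_face_ids: int = 1715, num_object_classes: int = 1000):
        super().__init__()
        backbone = vgg16(weights=None)      # no pretrained weights
        self.features = backbone.features   # shared convolutional trunk
        self.avgpool = backbone.avgpool
        # Shared fully connected layers up to the 4096-d penultimate layer.
        self.shared_fc = nn.Sequential(*list(backbone.classifier.children())[:-1])
        # Task-specific output heads (sizes are placeholders / assumptions).
        self.face_head = nn.Linear(4096, num_face_ids)
        self.object_head = nn.Linear(4096, num_object_classes)

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.shared_fc(x)
        return self.face_head(x) if task == "faces" else self.object_head(x)


if __name__ == "__main__":
    model = DualTaskVGG16()
    batch = torch.randn(2, 3, 224, 224)        # dummy image batch
    print(model(batch, task="faces").shape)    # torch.Size([2, 1715])
    print(model(batch, task="objects").shape)  # torch.Size([2, 1000])
```

In a setup like this, researchers can inspect whether individual units in the shared trunk end up responding mostly to faces or mostly to objects, which is the kind of spontaneous segregation the study reports.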
(An article in MIT News examining the research notes that the brain sets aside specific areas for other tasks, including understanding language, detecting written words and perceiving songs with vocals.)
As noted, the work raises unusual questions.
A person of faith could ponder how this segregation of cognition lets the brain rapidly pick out a face in a forest and see the work of a loving deity.
A rationalist might see an evolutionary advantage to quickly recognizing a friend or foe — and storing the information for future reference.
But why did the research team’s deep neural network separate the functions?
Maybe it is like a raindrop jinking down a pane, its path pre-ordained by the demand for efficiency.
Maybe the ghost in the machine just needs us to get out of its computing way.