The Difference Between Human and Computer Vision

Things sure have changed a lot since the 1960s, when engineers aimed to teach computers to see, and the proposals were, according to John Tsotsos, a computer scientist at York University, “clearly motivated by characteristics of human vision.” Now, computers beat us at our own game.

Computer vision has grown from a pie-in-the-sky idea into a sprawling field. Computers can now outperform human beings in some vision tasks, like classifying pictures — dog or wolf? — and detecting anomalies in medical images. And the way artificial “neural networks” process visual data looks increasingly dissimilar from the way humans do.
[...]
This raises the question: Does computer vision need inspiration from human vision at all?

In some ways, the answer is obviously no. The information that reaches the visual cortex is constrained by anatomy: Relatively few nerves connect the visual cortex with the outside world, which limits the amount of visual data the cortex has to work with. Computers don’t have the same bandwidth concerns, so there’s no reason they need to work with sparse information.

According to Tsotsos, however, disregarding human vision is folly.

Find out more about this over at Quanta Magazine.

(Image Credit: PublicDomainPictures/Pixabay)

