What Is Computer Vision & How Does It Work?

AI technology has come a long way from its origins with Alan Turing, John McCarthy, and Marvin Minsky, and even from grim pop-culture imaginings of AI like 2001: A Space Odyssey, Blade Runner, The Terminator, and RoboCop. Today, AI technology impacts our world in many positive ways, and computer vision is just one amazing development. But what, exactly, is computer vision, how does it work, and how can it improve our lives?

What Is Computer Vision?

Computer vision (CV) is a type of artificial intelligence (AI) technology with boundless applications. Designed to mirror human vision, it is programmed to see things in images and video the way the human eye does, enabling computers to understand, interpret, and derive meaning from the visual world.

Although human vision is innately complex, computer vision technology has grown by leaps and bounds in recent years, and it can now identify and label objects with greater proficiency than ever before.

A Brief History of Computer Vision

It may surprise some to know that computer vision development first began in the 1950s. Scientists studied the human eye in greater detail and tried applying their new knowledge to computers. Then in the 1960s, universities began developing it as a precursor to artificial intelligence – hoping that the technology could transform the world. 

In fact, the term “artificial intelligence” was coined in 1956 during the Dartmouth Summer Research Project, where John McCarthy proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” More than six decades later, artificial intelligence is part of our daily lives, and computer vision is a multi-billion-dollar industry.

But why did it take so many decades to arrive where we are today? Computer technology initially could not match the high expectations of the scientific community or tackle the complex problems that researchers hoped to solve. In particular, computer vision proved one of the most difficult AI challenges.

It wasn’t until 2012 that a team from the University of Toronto finally made a breakthrough, demonstrated at the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Their entry, AlexNet, was a deep neural network that performed image recognition with a far lower error rate than any previous approach, paving the way for even greater improvements and advancements in computer vision.

Other technology advancements have helped the evolution of computer vision: 

  • Mobile technology with built-in cameras has multiplied the number of photos available for machine learning.
  • Computers are more powerful than ever.
  • Hardware designed for computer vision is more accessible than ever before.
  • New algorithms based on convolutional neural networks (CNNs) enable better prediction of output values and improve performance over time as they learn.

How Does Computer Vision Work & Why Does It Matter?

It’s important to understand that since computer vision mimics human sight, technology developments hinge on our understanding of how the human brain works and how human eyes process images. It’s safe to say that we don’t know everything yet. But as we understand more and more about the intricate complexities of brain functionality, we can create more complex, accurate, efficient, and powerful computer vision systems.

Computer vision algorithms are based on learning from thousands and thousands of images, accumulating data, and analyzing that data so the technology can draw on its accumulated knowledge to accurately identify future images. Rather than being told what features to look for, the computer is given a multitude of examples so that it learns to accurately identify images on its own.

Before machine learning, this data collection was done manually, and developers coded each rule by hand – a tedious, cumbersome process with a large margin of error. With the advent of machine learning, these tasks could be automated. Engineers could create computer vision applications that detected patterns in images and used statistical algorithms to classify them.
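
To make that shift concrete, here is a minimal sketch of the statistical approach in Python using scikit-learn. The article names no tools, so the library and the synthetic stand-in “images” below are assumptions for illustration only:

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)

  # 200 fake 8x8 grayscale "images", flattened into 64-value feature vectors.
  images = rng.random((200, 64))
  # Label an image 1 if its center pixels are bright, 0 otherwise -- this is
  # the pattern the model must discover from examples, not hand-coded rules.
  labels = (images[:, 27:37].mean(axis=1) > 0.5).astype(int)

  model = LogisticRegression(max_iter=1000)
  model.fit(images[:150], labels[:150])           # learn from labeled examples
  print(model.score(images[150:], labels[150:]))  # accuracy on unseen examples

The point of the sketch is the workflow, not the model: the engineer supplies labeled examples and a statistical algorithm, and the classification rule is learned rather than written by hand.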

Deep learning took machine learning even further. It uses sophisticated neural networks – artificial networks of neurons modeled loosely on human vision – to do the hard work. A neural network consists of layers of nodes, each connected to the adjacent layers; deep neural networks simply stack more of those layers. Given many examples of a specific type of data, the network finds commonalities, encodes them as mathematical parameters, and uses those parameters to accurately identify future data – without human intervention.
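
Here is what that layered structure can look like in code – a minimal sketch using PyTorch (an assumed framework choice; the article names none). Each convolutional layer picks out progressively richer visual features, and stacking more layers is exactly what makes a network “deep”:

  import torch
  import torch.nn as nn

  class TinyCNN(nn.Module):
      def __init__(self, num_classes: int = 2):
          super().__init__()
          self.features = nn.Sequential(
              nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges and colors
              nn.ReLU(),
              nn.MaxPool2d(2),
              nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: shapes and parts
              nn.ReLU(),
              nn.MaxPool2d(2),
          )
          self.classifier = nn.Linear(32 * 16 * 16, num_classes)

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          x = self.features(x)                   # extract features layer by layer
          return self.classifier(x.flatten(1))   # map features to class scores

  # One forward pass on a batch of four 64x64 RGB images.
  scores = TinyCNN()(torch.randn(4, 3, 64, 64))
  print(scores.shape)  # torch.Size([4, 2])

Real computer vision networks stack dozens of such layers, but the principle is the same: each layer transforms the output of the one before it.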

Because they process enormous amounts of data and perform many complex mathematical calculations, deep learning systems require powerful hardware. At the same time, deep learning applications are faster and easier to develop than previous types of machine learning. With the growth of the internet, the rise of cloud networking, and the development of super-speedy chips, deep learning has taken computer vision forward by leaps and bounds.

Today, all types of facial recognition systems, medical technologies, and automated vehicle systems use modern CV applications:

  • CV helps self-driving cars recognize their surroundings. It helps them read road signs, detect pedestrian or vehicular traffic, and spot obstacles in the road. 
  • CV is critical for facial recognition. It adds a layer of security to smartphones or helps law enforcement or defense agencies recognize suspects in video feeds.
  • It improves health outcomes by analyzing X-ray and MRI images for signs of disease far more quickly than traditional methods.
  • CV technology is applied to augmented reality (AR) and virtual reality (VR). For AR gear, CV helps embed virtual objects onto real-world images. For VR games, CV provides simultaneous localization and mapping (SLAM), user-body and gaze tracking, and structure from motion (SfM).
  • And, with Lunar Eye, computer vision improves manufacturing and logistics by giving computers the ability to recognize objects, people, and faces and to make decisions faster and more accurately than the human brain can.

An Example of a Computer Vision Application

Let’s say you want to use computer vision to recognize sports cars. There are hundreds of car models out there that all have doors, tires, hoods, windows, taillights – you get the idea. And there are hundreds of thousands of pictures of cars in different settings under different lighting. So how do you teach a system to reliably pick out the sports cars?

You supply your computer vision system – your neural network – with hundreds or thousands of examples: not just examples of the right answer, but of the wrong answer too. Training and learning continue until the output is reliable. And today’s technology allows for about 99% accuracy, versus just 50% about a decade ago.
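
A hedged sketch of that training process, again in PyTorch, might look like the following. The random tensors stand in for real labeled photos of sports cars (label 1) and everything else (label 0); everything here is illustrative rather than a production recipe:

  import torch
  import torch.nn as nn

  model = nn.Sequential(                 # a deliberately tiny classifier
      nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
      nn.Flatten(), nn.Linear(8, 2),
  )
  optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()

  images = torch.randn(64, 3, 64, 64)    # stand-ins for labeled photos
  labels = torch.randint(0, 2, (64,))    # 1 = sports car, 0 = not

  for epoch in range(10):                # training repeats until output is reliable
      optimizer.zero_grad()
      loss = loss_fn(model(images), labels)
      loss.backward()
      optimizer.step()

  accuracy = (model(images).argmax(1) == labels).float().mean().item()
  print(f"training accuracy: {accuracy:.0%}")

In practice the loop would run over many batches of real images, with a held-out set of examples the network never trained on to confirm accuracy figures like those quoted above.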

The Cloud-Based Lunar Eye Platform Can Help: 

  • Avoid errors and delays in the manufacturing process by detecting irregularities or defects. 
  • Make sorting more efficient and accurate with optical character recognition and color analysis.
  • Monitor machines, common areas, production areas, employee areas, and high-risk zones, making intelligent decisions in real time.
  • Provide top-notch security with facial recognition, license-plate detection, and area monitoring – even searching for a specific person or object.
  • Improve logistics by tracking shipments, monitoring loading areas, assessing efficiencies, and providing predictive modeling.
  • Improve productivity with tools that analyze equipment usage and employee activity, make forecasts, and help design better environments and workflows.
  • Plus, much more!

Want to learn more about Lunar Eye? We’ll show you how it works. Contact us to schedule a free demo.