
The Human Element That Powers Today’s Coolest Tech

From aerial drones and cloud computing to augmented reality and virtual assistants, the tech world is awash with new developments that feel like they’ve been taken from the pages of a science fiction novel. In many cases, these revolutionary technologies seem to obviate the need for human beings, letting you enjoy the benefits without having to lift a finger.

Along with these powerful innovations comes the fear of being replaced by a machine. However, it’s all too easy to forget that such innovations didn’t just come out of thin air. In most cases, humans are still required in order to bring the magic behind these inventions to life, training the algorithms that power these new technologies. Here’s a look at how humans are having an impact on some of the biggest tech trends of today and tomorrow.

Self-Driving Cars

Few inventions capture the promise and wonder of futuristic technologies like self-driving cars. Companies like Uber are already looking for ways to automate their fleet of vehicles, removing the need for a driver entirely, with other tech titans like Google and Tesla also throwing their hat in the ring.

For these cars to drive on their own, they need ‘vision.’ To train them to recognize the objects they encounter on the road, they have to be fed information. Humans annotate and segment thousands (sometimes millions) of images of streetscapes so that the computer vision algorithms learn to distinguish a road from a sidewalk, or a pedestrian from a cyclist, and how to prioritize each of them in decision making. That human-labeled data is a big part of what makes these vehicles safer.
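The production pipelines behind these systems are proprietary, but a minimal sketch can show how human-drawn pixel masks feed a segmentation model. In the sketch below, the class list, model choice, and the random tensors standing in for annotated camera frames are all illustrative assumptions:

```python
# Minimal sketch: training a segmentation model on human-annotated streetscapes.
# The class list, the model, and the random tensors standing in for real
# annotated images are illustrative assumptions, not any company's pipeline.
import torch
from torchvision.models.segmentation import fcn_resnet50

CLASSES = ["road", "sidewalk", "pedestrian", "cyclist"]  # hypothetical label set

model = fcn_resnet50(weights=None, num_classes=len(CLASSES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# Stand-ins for a batch of camera frames and their human-drawn pixel masks.
images = torch.randn(2, 3, 256, 256)                   # RGB frames
masks = torch.randint(0, len(CLASSES), (2, 256, 256))  # per-pixel labels

model.train()
logits = model(images)["out"]   # (batch, num_classes, height, width)
loss = loss_fn(logits, masks)   # compare predictions to the human annotations
loss.backward()
optimizer.step()
```

Every one of those per-pixel labels traces back to a person deciding what each region of the image actually is.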

Humans also play a large role in keeping these vehicles on the road. When automotive tech company Delphi made a nine-day cross-country road trip from San Francisco to New York City with a self-driving car, the vehicle was able to navigate on its own 99 percent of the time — but the engineers along for the ride still had to steer occasionally in order to handle anomalies like construction zones and aggressive lane changes.

IBM Watson

IBM’s supercomputer Watson first burst onto the scene in 2011, when it beat “Jeopardy!” champions Ken Jennings and Brad Rutter at their own game. Currently, the technology is used for a variety of commercial applications, from diagnosing illnesses to improving business processes.

Of course, in order to get on “Jeopardy!” in the first place, Watson had to be able to comprehend a variety of texts in English. To do so, Watson relied on developments in natural language processing such as named entity recognition and coreference resolution in order to resolve potential ambiguities. Once Watson understood the question, it searched through a locally-stored database of 200 million pages of information for the correct answer.
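Watson’s internal NLP stack is not public, but named entity recognition, one of the building blocks mentioned above, can be illustrated with an off-the-shelf library. The snippet below is a sketch using spaCy and its small English model, not Watson’s actual pipeline:

```python
# Sketch of named entity recognition with spaCy; Watson's own NLP stack is
# proprietary and far more elaborate. Requires: pip install spacy
# and: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Watson beat Ken Jennings and Brad Rutter on Jeopardy! in 2011.")

# Each entity comes with a label such as PERSON, DATE, or ORG, which helps
# disambiguate what a clue is actually asking about.
for ent in doc.ents:
    print(ent.text, ent.label_)
```

Models like this only label entities reliably because people first annotated large volumes of text to show them what a person, place, or date looks like.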
Watson owes its game show wins, and its successes in other fields, to countless human workers who train it and help improve it. Human experts improve Watson by curating content and building training datasets for machine learning. For Watson to read doctors’ handwriting, for example, human typists had to painstakingly enter thousands and thousands of texts and then match them with the correct images for Watson to examine. Constant, ongoing communication between Watson and humans keeps it accurate and up to date.

Snapchat Filters

Sure, self-driving cars and talking robots are cool — but we don’t use these in everyday life yet. Snapchat “lenses” (popularly known as filters) are one of the app’s defining features and a massive hit on social media, letting you take pictures where you’re wearing a flower crown or swapping faces with a friend.

In order to bring these filters to life, however, the Snapchat app first has to determine where your facial features are located using computer vision so that it can properly impose another image on your head. According to company patents, Snapchat uses an “active shape model” of an average human face that’s been trained by feeding their algorithms thousands of images that have been annotated to identify key facial features. It then applies this model to your face, adjusting it where necessary in the case of deviations.
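Snapchat’s implementation is proprietary, but the underlying idea of locating facial landmarks with a pretrained model can be sketched with dlib’s 68-point shape predictor. The image path and the model file in the sketch below are assumptions for illustration:

```python
# Sketch of facial landmark detection with dlib's pretrained 68-point model;
# Snapchat's "active shape model" pipeline is only approximated here.
# The predictor file is available from
# http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("selfie.jpg")  # hypothetical input photo
for face in detector(image):
    landmarks = predictor(image, face)
    # 68 (x, y) points covering eyes, brows, nose, mouth, and jawline;
    # a filter would be warped and anchored to these points.
    points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
    print(points[:5])
```

That pretrained predictor, like Snapchat’s own models, only exists because thousands of training images were annotated by hand with those key facial features.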

 

Next time you see a really cool technology innovation, remember the human intelligence that went into building it and maintaining it.
