
From aerial drones and cloud computing to augmented reality and virtual assistants, the tech world is awash with new developments that feel like they’ve been taken from the pages of a science fiction novel. In many cases, these revolutionary technologies seem to obviate the need for human beings, letting you enjoy the benefits without having to lift a finger.

Along with these powerful innovations comes the fear of being replaced by a machine. However, it’s all too easy to forget that such innovations didn’t just come out of thin air. In most cases, humans are still required in order to bring the magic behind these inventions to life, training the algorithms that power these new technologies. Here’s a look at how humans are having an impact on some of the biggest tech trends of today and tomorrow.

Self-Driving Cars

Few inventions capture the promise and wonder of futuristic technologies like self-driving cars. Companies like Uber are already looking for ways to automate their fleet of vehicles, removing the need for a driver entirely, with other tech titans like Google and Tesla also throwing their hat in the ring.

For these cars to drive on their own, they need ‘vision.’ To train them to recognize various objects on the road, they need to be fed information. Humans annotate and/or segment thousands, sometimes millions, of images of streetscapes to teach the computer vision algorithm to distinguish a road from a sidewalk, or a pedestrian from a cyclist, and how to prioritize each in decision-making. This human-labeled data is what makes these vehicles safer.
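
To make the link between annotation and algorithm concrete, here is a minimal sketch of how a human-labeled mask supervises a segmentation model. The class list, tensors, and tiny network are illustrative assumptions, not any carmaker's actual pipeline.

```python
# Minimal sketch: a pixel-labeled image supervising a segmentation model.
import torch
import torch.nn as nn

# Classes a human annotator can assign to each pixel (illustrative).
CLASSES = ["road", "sidewalk", "pedestrian", "cyclist"]

# A deliberately tiny fully-convolutional network; real systems use
# far deeper architectures.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, len(CLASSES), kernel_size=1),  # per-pixel class scores
)

# Stand-ins for one annotated streetscape: an RGB image and a mask in
# which each pixel holds the index of the class a human assigned to it.
image = torch.rand(1, 3, 64, 64)                    # normalized RGB
mask = torch.randint(0, len(CLASSES), (1, 64, 64))  # human labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step: the human-drawn mask is the supervision signal.
logits = model(image)            # shape (1, num_classes, 64, 64)
loss = loss_fn(logits, mask)
loss.backward()
optimizer.step()
print(f"pixel-wise loss: {loss.item():.3f}")
```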

Humans also play a large role in keeping these vehicles on the road. When automotive tech company Delphi made a nine-day cross-country road trip from San Francisco to New York City with a self-driving car, the vehicle was able to navigate on its own 99 percent of the time — but the engineers along for the ride still had to steer occasionally in order to handle anomalies like construction zones and aggressive lane changes.

IBM Watson

IBM’s supercomputer Watson first burst onto the scene in 2011, when it beat “Jeopardy!” champions Ken Jennings and Brad Rutter at their own game. Currently, the technology is used for a variety of commercial applications, from diagnosing illnesses to improving business processes.

Of course, in order to get on “Jeopardy!” in the first place, Watson had to be able to comprehend a variety of texts in English. To do so, Watson relied on developments in natural language processing such as named entity recognition and coreference resolution in order to resolve potential ambiguities. Once Watson understood the question, it searched through a locally-stored database of 200 million pages of information for the correct answer.
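
As an illustration of one of those building blocks, here is a small named entity recognition example using the open-source spaCy library. This is not Watson's actual pipeline, just a sketch of the technique; it assumes the en_core_web_sm model has been downloaded.

```python
# Named entity recognition with spaCy (illustrative, not Watson's stack).
import spacy

# Assumes `python -m spacy download en_core_web_sm` has been run.
nlp = spacy.load("en_core_web_sm")

clue = ("In 2011 this IBM supercomputer beat Ken Jennings "
        "and Brad Rutter on Jeopardy!")
doc = nlp(clue)

# Entity spans tell a question-answering system which words name
# people, organizations, and dates before it searches for an answer.
for ent in doc.ents:
    print(ent.text, ent.label_)
# e.g. "2011 DATE", "IBM ORG", "Ken Jennings PERSON", ...
```
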
Watson owes its game show wins, and its successes in other fields, to countless human workers who train and improve it. Human experts curate content and build training datasets for machine learning. For Watson to read doctors’ handwriting, for example, human typists had to painstakingly enter thousands upon thousands of texts and match them with the correct images for Watson to examine. Communication between Watson and humans is constant and ongoing, improving accuracy and keeping the system up to date.

Snapchat Filters

Sure, self-driving cars and talking robots are cool — but we don’t use these in everyday life yet. Snapchat “lenses” (popularly known as filters) are one of the app’s defining features and a massive hit on social media, letting you take pictures where you’re wearing a flower crown or swapping faces with a friend.

In order to bring these filters to life, however, the Snapchat app first has to determine where your facial features are located using computer vision so that it can properly superimpose another image on your face. According to company patents, Snapchat uses an “active shape model” of an average human face that has been trained by feeding its algorithms thousands of images annotated to identify key facial features. It then applies this model to your face, adjusting it where necessary to account for deviations.
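
For a flavor of how landmark-based face localization works, here is a sketch using the open-source dlib library. Snapchat's production system is proprietary; the image path and the 68-point model file below are assumptions.

```python
# Facial landmark detection with dlib (illustrative sketch).
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
# The 68-point model was itself trained on thousands of human-annotated
# faces; assumes the .dat file has been downloaded from dlib.net.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("selfie.jpg")                # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    landmarks = predictor(gray, face)
    # Points 36-47 cover the eyes, 48-67 the mouth, etc.; a filter is
    # warped so its anchor points line up with these coordinates.
    for i in range(landmarks.num_parts):
        p = landmarks.part(i)
        cv2.circle(img, (p.x, p.y), 2, (0, 255, 0), -1)

cv2.imwrite("landmarks.jpg", img)
```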

 

Next time you see a really cool technology innovation, remember the human intelligence that went into building it and maintaining it.

Waymo and Lyft Team Up on Self-Driving Technology

The New York Times reported on Sunday that Waymo, the self-driving unit of Google’s parent company Alphabet, and the ride-sharing startup Lyft are teaming up to bring self-driving technology to the mainstream.

“We’re looking forward to working with Lyft to explore new self-driving products that will make our roads safer and transportation more accessible. Lyft’s vision and commitment to improving the way cities move will help Waymo’s self-driving technology reach more people in more places,” Waymo said in a statement to Recode.

Together, these companies have partnerships with the majority of major auto manufacturers. Earlier this year, Lyft announced a partnership with General Motors to test the Chevy Bolt with the general public within the next few years. Waymo has deals with Fiat Chrysler and Honda to test its technology on the road.

What does this mean for Uber? The company has poured hundreds of millions of dollars into developing self-driving cars to catch up to Google and views the technology as crucial to its future. Uber has had a rough year to date, and this may be a considerable setback.

The Latest Self-Driving Technology Updates

Self-driving cars are one of the hottest things in tech right now. It feels like just yesterday we were saying “can you imagine!?” Now here we are in 2017, on the cusp of having autonomous cars pick us up at our front door. To see where things stand, here are a few updates on what is going on in the world of autonomous cars.

Training Datasets Released For All to Leverage


Self-driving cars use advanced artificial intelligence algorithms to make thousands of decisions. To know what decisions to make, the algorithms are trained on datasets covering a wide variety of scenarios.

Training datasets are usually very expensive to create because annotating the images takes a lot of time. Annotating a single image (or a single frame from a video) can take anywhere from seconds to hours, depending on its complexity and how much you are looking to teach the algorithm.
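
Some rough, made-up arithmetic shows why this adds up quickly; the per-image time and hourly rate below are purely illustrative.

```python
# Back-of-the-envelope estimate of annotation effort (invented numbers).
images = 25_000              # size of a modest street-level dataset
minutes_per_image = 30       # pixel-level masks can take far longer
hours = images * minutes_per_image / 60
print(f"{hours:,.0f} annotator-hours")    # 12,500 hours
# At an assumed $15/hour, that is $187,500 of labeling labor alone.
print(f"${hours * 15:,.0f}")
```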

Luckily for technology startups, according to TechCrunch, Mapillary is releasing a free dataset of 25,000 street-level images from 190 countries, with pixel-level annotations that can be used to train automotive AI systems. Mapillary is a crowdsourcing company whose computer vision reads smartphone images uploaded by people around the world, identifying locations in 3D and recognizing the objects within them.
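
Consuming such a dataset typically means pairing each image with its pixel-level label mask. The directory layout and file names in this sketch are assumptions; the dataset's own documentation defines the real structure.

```python
# Sketch of loading (image, label-mask) pairs from a local dataset copy.
from pathlib import Path

import numpy as np
from PIL import Image

DATA = Path("mapillary_vistas")          # hypothetical local directory

for img_path in sorted((DATA / "images").glob("*.jpg")):
    label_path = DATA / "labels" / (img_path.stem + ".png")
    image = np.array(Image.open(img_path))
    labels = np.array(Image.open(label_path))  # one class id per pixel
    # image and labels now form one (input, target) training pair.
    print(img_path.name, image.shape, np.unique(labels)[:5])
```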

The release of this dataset opens new opportunities for tech startups to advance the machine learning algorithms used in self-driving cars. It’s no surprise that the release was sponsored by Lyft and by auto manufacturers Toyota and Daimler.

Humans may be what is slowing down self-driving cars


The benefits of self-driving cars are many: safer roads, less traffic, lower fuel consumption, and, not least, enhanced human productivity, since hours once lost behind the wheel can now be spent getting things done on your commute. With all of these benefits for humans, it turns out that we ourselves may be the problem holding the technology back.

Driving takes a certain amount of assertiveness, and according to John C. Dvorak, a columnist at PCMag.com, self-driving cars are too polite. In right-of-way situations like four-way stops, human drivers will assert their intention to go, while the autonomous car may sit until the intersection is clear. If a cyclist is hogging the road, it will slow down and follow behind until the path is clear.
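
To make the politeness trade-off concrete, here is a toy decision rule with an invented “assertiveness” knob. Real motion planners are vastly more sophisticated; everything here is hypothetical.

```python
# Toy four-way-stop policy illustrating the politeness trade-off.
def should_proceed(wait_seconds: float,
                   other_car_creeping: bool,
                   assertiveness: float = 0.5) -> bool:
    """Decide whether to enter the intersection.

    assertiveness near 0 reproduces the overly polite behavior
    described above: the car yields whenever anyone else creeps
    forward. Raising it trades politeness for throughput.
    """
    if not other_car_creeping:
        return True                       # clear right of way
    # A timid car waits much longer before asserting itself.
    patience_limit = 10 * (1 - assertiveness)
    return wait_seconds > patience_limit

# A polite car stalls while a human driver creeps in...
print(should_proceed(4.0, True, assertiveness=0.1))   # False
# ...while a more assertive tuning eventually claims its turn.
print(should_proceed(4.0, True, assertiveness=0.7))   # True
```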

John Adams, a professor at University College London, says “Driving in cities would be unacceptably slow if autonomously-operating cars were required to assume that every pedestrian might jump into traffic as fast as humanly possible. But if pedestrians came to learn that cars would always avoid them then they would likely act in much less controlled ways on streets and pavements.”

Will the algorithms become more advanced to handle these situations? Or will humans have to adapt to allow for these polite road warriors?

No more fighting over parking spots


Once a self-driving car has dropped you off, it needs to find a place to park, and every human driver knows how difficult and annoying that can be. Val.ai, a hackathon team that came out of the TechCrunch Disrupt NY event, created a way for autonomous vehicles to bid for parking spaces in an auction.

The tech twist here is that these cars aren’t looking for an empty parking spot; they are negotiating with other autonomous cars that are currently parked and will be leaving soon. The model was based on public parking spots, which raises concerns about using public space for private gain, a practice TechCrunch calls “#JerkTech.” But there is still plenty of opportunity in private parking lots.
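
For illustration, here is a toy sealed-bid, second-price auction loosely inspired by that idea. The class names, pricing rule, and bids are all invented.

```python
# Toy sealed-bid auction for a soon-to-be-vacated parking spot.
from dataclasses import dataclass

@dataclass
class Bid:
    car_id: str
    amount: float        # dollars offered for the spot

def run_auction(bids: list[Bid]) -> tuple[str, float]:
    """Second-price auction (assumes at least two bids): the highest
    bidder wins but pays the runner-up's bid, which encourages
    honest bidding."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner.car_id, runner_up.amount

bids = [Bid("car_A", 3.50), Bid("car_B", 2.75), Bid("car_C", 4.00)]
winner, price = run_auction(bids)
print(f"{winner} wins the spot for ${price:.2f}")  # car_C pays $3.50
```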

There you have it, the latest in self-driving cars. Do you work on self-driving technology? We would love to hear from you to discuss how iMerit’s dataset services may be of use to you. Get in touch!
