Artificial Intelligence Gives Drones Abilities We’ve Only Dreamed About

July 12, 2017

George Matus was still in high school when he began raising millions for his startup, Teal. The former quad drone racer’s pitch to investors was a wish list of what he thought a drone should be. More than just an aerial camera, his quad would be freaky fast and easy to use — even fly in the rain.

And, most challenging of all, Teal would think and learn. It would be a platform that developers might use for all kinds of complex applications, from counting a farmer’s cows to following a target without using GPS.

To do all that, Teal would need a tiny supercomputer…and a digital brain.

That would have been impossible just a couple of years ago. But a handful of new technologies, emerging from research labs, small startups, and major tech companies, have converged to make this kind of innovation possible. That convergence is paving the way for quadcopters and self-driving cars that can navigate by themselves: machines that recognize what they’re seeing and make independent decisions accordingly, freed from the old need for an internet connection.

Breakthroughs in artificial intelligence (AI) lie at the root of this advancement. AI, the scientific shorthand for a machine’s ability to copy human traits like thinking and learning, has transformed how we use technology. AI now permeates our lives through Apple’s Siri, Google search, and Facebook newsfeeds.

But that tech taps into the cloud. Ask Siri for help splitting the dinner tab, and your voice is sent off to Apple servers for some speedy calculations. It doesn’t work without the web, or often even with it.

“Robots and UAVs can’t depend on that connection back to the data center,” says Jesse Clayton, Nvidia’s senior manager of product for intelligent machines. Imagine the delay if your quadcopter’s live feed had to bounce off the cloud before a computer could calculate the safest route. You’d be better off flying manual.
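To get a rough feel for why, here’s a back-of-the-envelope sketch of how far a quadcopter travels while it waits for an answer. The speeds and latencies below are illustrative assumptions, not measurements of any real drone or cloud service:

```python
# Back-of-the-envelope sketch: how far does a drone fly "blind" while it
# waits for an obstacle-avoidance decision? All numbers are illustrative
# assumptions, not benchmarks.

drone_speed_m_per_s = 15.0   # an assumed cruise speed

# Assumed latencies (seconds): a network round trip to a data center plus
# server-side inference, versus running the model on the drone itself.
cloud_round_trip_s = 0.30
onboard_inference_s = 0.03

for label, delay_s in [("cloud", cloud_round_trip_s),
                       ("onboard", onboard_inference_s)]:
    blind_distance_m = drone_speed_m_per_s * delay_s
    print(f"{label:>7}: {delay_s * 1000:.0f} ms -> "
          f"{blind_distance_m:.1f} m traveled before a decision arrives")
```

Under those assumptions, the cloud-dependent drone covers roughly ten times more ground before it can react, which is why response time matters so much for obstacle avoidance.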

That bottleneck has companies racing to build tiny, AI-capable supercomputers.

If I Only Had a Brain

When Max Versace started working on AI algorithms 25 years ago, computers weren’t advanced enough to achieve his vision of an artificial brain. But by 2006, he and a colleague had cooked up a method for computing AI algorithms much faster. They patented it and formed a company, Neurala, around their equations. Then DARPA, the U.S. government’s secretive military research agency, asked Neurala to build a software system that could emulate a fully functional brain.

The physical part of that brain is made from computer processors built by Hewlett Packard and IBM. Neurala wrote the software. “In a sense, we build minds, which are algorithms,” Versace says.

Neurala took its inspiration from a rat brain. With just half a gram of gray matter, a rodent can navigate obstacles, forage for food, and evade predators using complex and efficient senses. Yet its brain is far simpler to model than a human brain.

Then, once Neurala built DARPA this fake brain, or neural network, NASA asked them to make it work in a Mars rover. It can take half an hour to bounce signals off the Red Planet and hear back, which makes it somewhat tough to steer a robot. NASA wanted the rover to be able to make more decisions on its own. Neurala’s brain never flew to Mars, but that request from NASA pushed the company to start working on autonomous robots with artificial brains. And that brain will soon power Teal’s drones.

Three major advances have made the fusion of drones and AI possible. First, researchers have amassed staggering amounts of data in recent years, mainly vast image sets. That data is the proving ground for the second advance: new, more sophisticated AI algorithms, the kind that let self-driving cars recognize and track obstacles on the road. But that skill doesn’t matter much if it can’t be freed from a supercomputer, so the third advance had to come from new computer processors.

“We are really at the invention of the wheel in terms of AI,” Versace says. “This is just the beginning.”

Versace adds that many current AI algorithms are trained on a supercomputer and then immediately stop learning. He compares it to graduating college at 25 years old and never getting any smarter.

“You go to work every day, perform your duties, wake up tomorrow and you don’t know anything new,” he says. “You just know what you learned the last day of school.”

But he believes AI shouldn’t stop learning.

“We have come up with a different solution, which relies on how the brain works, how the cerebral cortex works,” he adds. “It enables machines to learn a little bit every day, every time they’re used.”
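As a rough illustration of the difference between the two approaches, compare a model whose weights are frozen after training with one that keeps taking small updates from the data it sees in the field. This is a generic sketch using scikit-learn’s SGDClassifier on made-up data, not Neurala’s actual method:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy data: 2-D "sensor readings" with two made-up classes
# (say, obstacle = 1 / no obstacle = 0).
X_train = rng.normal(size=(1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# "Graduating college": train once, then the weights never change again.
frozen = SGDClassifier(random_state=0)
frozen.fit(X_train, y_train)

# Learning "a little bit every day": the same model class, but it keeps
# taking small updates from each new batch it sees in the field.
online = SGDClassifier(random_state=0)
online.partial_fit(X_train, y_train, classes=[0, 1])

for day in range(30):
    # New observations whose distribution slowly drifts over time.
    X_new = rng.normal(size=(50, 2)) + np.array([0.05 * day, 0.0])
    y_new = (X_new[:, 0] + X_new[:, 1] > 0.05 * day).astype(int)
    online.partial_fit(X_new, y_new)   # the frozen model gets no such update

print("frozen accuracy on drifted data:", frozen.score(X_new, y_new))
print("online accuracy on drifted data:", online.score(X_new, y_new))
```

As the toy data drifts, the frozen model’s accuracy typically decays while the continuously updated one keeps pace, which is the gap Versace is describing.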

Versace and other scientists are now working on what’s called deep learning: You show a computer thousands of pictures of pedestrians, and eventually it will spot a little old lady in a crosswalk that it’s never seen before. Today, that kind of processing usually happens in the cloud.
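In code, that idea looks roughly like the sketch below: a small convolutional network is shown many labeled examples, adjusts its weights to reduce its mistakes, and can then classify an image it has never encountered. This is a generic PyTorch example on stand-in random data, not the actual model any of these companies ships:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: in practice this would be thousands of labeled photos
# (pedestrian = 1, no pedestrian = 0); random tensors are used here so the
# sketch runs on its own.
images = torch.randn(2000, 3, 64, 64)
labels = torch.randint(0, 2, (2000,))
loader = DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True)

# A small convolutional network: layers of filters that learn visual
# features, followed by a classifier head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: show the network the labeled examples repeatedly, nudging its
# weights to reduce classification error.
for epoch in range(3):
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()

# Inference: an image the network has never seen before.
new_image = torch.randn(1, 3, 64, 64)
prediction = model(new_image).argmax(dim=1)
print("pedestrian detected" if prediction.item() == 1 else "no pedestrian")
```

The same loop, scaled up to millions of real photos and much deeper networks, is the kind of processing that has historically run in the cloud; the new generation of onboard processors aims to bring at least the recognition step, and some of the learning, onto the drone itself.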

Source: blogs.discovermagazine.com