Researchers trained the DexNet 2.0 deep learning system on a vast library of 3D shapes paired with suitable grasp positions for those objects. Using virtual rather than real objects made it possible to train the AI much more quickly. “We can generate sufficient training data for deep neural networks in a day or so instead of running months of trials on a real robot,” Berkeley postdoctoral researcher Jeff Mahler told MIT Technology Review.
After training the AI system, the researchers connected it to a standard robotic arm outfitted with an off-the-shelf 3D depth-sensing camera. When confronted with a new object, the system can quickly figure out the best grasp to match. If it's more than 50 percent confident it can grab something, it succeeds 98 percent of the time. If its confidence is below that threshold, it can poke the object first to figure out a better grasp, and can then successfully grasp it 99 percent of the time, significantly better than any other system, the team says.
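The decision rule described above (grasp immediately when confidence is high, otherwise poke the object and re-sense) can be sketched roughly as follows. This is purely illustrative: the function names, the candidate scores, and the 50 percent threshold as a hard cutoff are assumptions for the sketch, not the actual Dex-Net 2.0 API.

```python
CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff, per the reported ">50 percent" figure

def best_grasp(scores):
    """Return (index, confidence) of the highest-scoring grasp candidate.

    `scores` stands in for the per-grasp quality estimates a trained
    network would produce; here it's just a list of floats in [0, 1].
    """
    idx = max(range(len(scores)), key=lambda i: scores[i])
    return idx, scores[idx]

def choose_action(scores):
    """Decide whether to grasp now or poke the object and re-sense first."""
    _, confidence = best_grasp(scores)
    if confidence > CONFIDENCE_THRESHOLD:
        return "grasp"
    return "poke_then_resense"
```

For example, `choose_action([0.2, 0.8])` would commit to the grasp, while `choose_action([0.1, 0.3])` would poke the object first to gather a better view.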
The researchers figure their new training methods, combined with cloud data and processing, could accelerate the use of robots in industry, even in non-traditional settings like hospitals. So it's not surprising that the study is backed by industry players heavily invested in robotics, including Toyota, Siemens and Amazon. Amazon actually runs an annual “Warehouse Picking Challenge” to find robots that can best pick items from warehouse shelves to fulfill orders.
The deep learning tech will be great for industry, allowing execs like Jeff Bezos to cut warehouse jobs and save money. However, it sucks for the workers who will be out of a job, and could widen the gap between ultra-wealthy titans like Bezos and average folks, showing once again that AI will require not just technological solutions, but political ones, too.