Vision-automation technology is taking over the factory floor, a testing ground for adoption in self-driving cars, drones
Robots that see underpin the future of self-driving cars, humanoid robots and autonomous drones. Right now, they’re serving their apprenticeship sizing up sausages.
Food manufacturers are combining advances in laser vision with artificial-intelligence software so that automated arms can carry out more-complex tasks, such as slicing chicken cutlets precisely or inspecting toppings on machine-made pizzas. At a sausage factory, more-powerful cameras and quicker processors enable robots to detect the twisted point between two cylindrical wieners fast enough that they can be cut apart at the rate of 200 a minute.
Being able to see is a major frontier in robotics and automation—crossing it is key to autonomous vehicles that can navigate obstacles, humanoid robots that can more closely integrate with humans and drones that can fly more safely.
Companies world-wide are investing in computer vision-based technology. Chip maker Intel Corp. bought Mobileye NV for $15.3 billion in March 2017, in part for the Israeli company’s vision-based driver-assistance technology. In April, Chinese e-commerce giant Alibaba Group Holding Ltd. led a $600 million funding round in startup SenseTime Group Ltd., which specializes in facial- and image-recognition technology.
The sensing and imaging market will grow about 10-fold to $18.5 billion by 2023, market-research firm Yole Développement forecasts. Growth is being spurred by worker shortages, rising labor costs and robots’ performance edge over humans.
A high-speed system may have a process time of 10 to 30 milliseconds, or about 100 times as fast as a human, said Bob Hosler, chief operating officer of the U.S. subsidiary of Osaka, Japan-based Keyence Corp., one of the biggest companies in the vision-products field.
“You can do more with a robot that can see,” said John Keating, a senior director at Natick, Mass.-based Cognex Corp., which makes vision sensors used by global manufacturers, including food processors.
Vision-sensing devices can be used all through the sausage-making process, from measuring to inspecting for defects to quality control on the final product, Mr. Hosler said.
Food manufacturers have been early adopters of new technologies from canning to bread slicers, and vision automation has been used for many years for tasks such as reading bar codes and sorting packaged products. Leaders now are finding the technology valuable because robot eyes outpace the human eye at certain tasks.
For years, Tyson Foods Inc. used sensors to map chicken fillets so they could be cut to the precise specifications required by restaurant customers that need them to cook uniformly. But exposure to the high-pressure, high-temperature water in its plants kept causing equipment failures.
Now technical improvements, tougher materials and declining prices mean the company can integrate vision technology in facilities including the new $300 million chicken-processing plant in Humboldt, Tenn., said Doug Foreman, who works in technology development at the Springdale, Ark.-based food company. The technology could help optimize the use of each part of the bird, he added.
Tyson is investing in a manufacturing automation center to further explore the application of vision technology in its operations, the company said.
Still, challenges remain in coaching robots to understand what they are seeing.
While vision sensors are good at scanning images for what’s missing, robotic eyes hit a wall when inspecting objects from multiple angles, according to engineers at Kyoto, Japan-based Omron Corp. Their proposed solution: big data. To teach a sensor to distinguish a chocolate chip from a burned bit in a cookie, for example, Omron is using AI to analyze thousands of inspection results. That sort of software will be crucial as robots increasingly permeate the economy.
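The chocolate-chip example boils down to training a classifier on labeled inspection results. The sketch below illustrates the idea with synthetic color features and scikit-learn; the features, numbers and model choice are assumptions for illustration, not Omron’s actual pipeline, which isn’t public.

```python
# Minimal sketch: classify cookie blemishes as chocolate chip vs. burned
# bit from simple color features. Data and features are synthetic
# stand-ins for real inspection images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per blemish: (mean brightness, red/blue ratio).
# Chocolate chips read brighter and warmer; burned bits read darker.
chips = rng.normal(loc=[0.45, 1.6], scale=0.05, size=(500, 2))
burned = rng.normal(loc=[0.20, 1.1], scale=0.05, size=(500, 2))

X = np.vstack([chips, burned])
y = np.array([0] * 500 + [1] * 500)  # 0 = chip, 1 = burned

clf = LogisticRegression().fit(X, y)

# A bright, warm-toned blemish should come back as a chip (0),
# a dark one as burned (1).
print(clf.predict([[0.44, 1.55], [0.19, 1.05]]))
```

The point of “thousands of inspection results” is that the decision boundary is learned from examples rather than hand-coded thresholds, which is what lets the system generalize to new cookies.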
Advances so far allow vision technology to ensure frozen pizzas have the correct toppings. Other applications include the ultrasonic slicing of cheese, cutting bread rolls with water jets and picking pancakes off a production line.
The difficulty of solving technical challenges on food assembly lines shows how hard it could be to develop automated vision for more-complex tasks, including some where human lives are at stake. A self-driving car, for example, needs to see from multiple angles and make split-second calculations to avoid one obstacle without hitting another.
Car makers, historically the biggest users of vision technology, are using it for emergency braking and scanning road signs; logistics companies deploy it to identify packages more quickly, and consumer-electronics companies use it to position liquid-crystal display screens more precisely than is possible with the naked eye.
The next big thing in machine vision is 3-D imaging—technology that can measure depth as well as diameter and gaps. One application is in bin picking: A robot able to sift through a box of items and identify, organize and adjust the contents could eventually step in for human workers in shops and warehouses.
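One step in bin picking shows why depth matters: an overhead 3-D sensor produces a depth map, and the robot can target the topmost item, i.e., the point closest to the camera. The sketch below is a toy illustration of that single step, with a made-up depth map; real systems add segmentation, pose estimation and collision checks.

```python
# Minimal sketch of one bin-picking step: from a depth map (meters from
# an overhead sensor), find the shallowest point -- the topmost item --
# as the next grasp candidate. A 2-D image alone can't rank items by
# height, which is why 3-D imaging is needed here.
import numpy as np

def next_grasp_point(depth_map):
    """Return (row, col) of the point closest to the camera."""
    idx = np.argmin(depth_map)
    return np.unravel_index(idx, depth_map.shape)

# Toy 4x4 depth map; 0.30 m marks the highest-stacked item in the bin.
bin_depths = np.array([
    [0.50, 0.48, 0.47, 0.50],
    [0.49, 0.30, 0.45, 0.49],
    [0.50, 0.44, 0.42, 0.50],
    [0.50, 0.50, 0.50, 0.50],
])
print(next_grasp_point(bin_depths))  # → (1, 1)
```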
The high cost of such technologies is a barrier to implementing them for such tasks, said Jairam Nathan, an analyst at Daiwa Capital Markets. “3-D vision increases the capability of robots significantly in tasks like bin picking, but the systems are still expensive,” he said.