2022 Industrial Robot Technology Trends: Machine Vision + AI + Edge Computing

In 2022, a notable trend in robotics development is greater flexibility. In a recent interview, Wendy Tan White, CEO of the robotics software company Intrinsic, predicted more innovation in industrial robotics. She believes we are on the cusp of a renaissance in industrial robotics, driven by software-first solutions, cheaper sensors, and richer data.


Manufacturers today are looking to do more with robotics. Some want smaller, more flexible designs that fit easily into existing production lines; others want existing robots that can be quickly repurposed and reassigned to new tasks.


In other areas, such as logistics, warehouses, or laboratories, there will be a growing need for robots that function outside of conventional manufacturing spaces. Collaborative robots (cobots), in particular, will continue to offer the possibility of closer cooperation with human workers. A well-known example is Amazon's Kiva robot, a mobile drive unit that carries shelves of goods to workers in a warehouse and supports them in their tasks.


In 2022 and beyond, robots will increasingly be used to pick and move products around warehouses and production lines. Other growth areas include collaborative robots tending computer numerical control (CNC) equipment and, increasingly, welding applications. So, are robots ready for these different roles?



Vision systems


An integral feature of robots performing new tasks, such as picking and moving products in warehouses, is the increased use of 2D and 3D vision systems. "Blind" robots (or robots without vision systems) can only perform simple repetitive tasks, whereas robots with machine vision can react intuitively to their surroundings.


With a 2D system, the robot is equipped with a single camera. This approach suits applications where reading color or texture matters, such as barcode detection. 3D systems, by contrast, grew out of spatial computing, a concept first developed at the Massachusetts Institute of Technology (MIT) in 2003. They rely on multiple cameras to build a 3D model of the target object, and suit tasks where the object's shape or location varies, such as automatically grasping parts.
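
To make the 2D/3D distinction concrete, here is a minimal sketch of how a 3D system can recover depth from a pair of cameras, using OpenCV's stereo block-matching. The file names, focal length, and baseline are illustrative assumptions, not parameters of any product mentioned in this article.

    import cv2
    import numpy as np

    # Load a rectified stereo pair (hypothetical file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Classic block matching: compares patches along epipolar lines
    # to estimate a per-pixel disparity.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

    # Convert disparity to metric depth: depth = f * B / d, where f is the
    # focal length in pixels and B the baseline between the two cameras.
    FOCAL_PX = 700.0   # assumed calibration value
    BASELINE_M = 0.06  # assumed 6 cm between cameras
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

    print("median scene depth (m):", np.median(depth[valid]))

A 2D system stops at the single image; the per-pixel depth map is what lets a 3D-equipped robot reason about where a part sits in space before grasping it.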


Both 2D and 3D vision systems have a lot to offer. 3D systems, in particular, can avoid some of the errors that 2D-equipped robots make when performing physical tasks, errors that can cause malfunctions and would otherwise require manual diagnosis and resolution. Going forward, robots equipped with 3D vision systems will unlock more potential in inspecting parts such as engine components for defects, checking product quality and packaging, verifying component orientation, and more.



The right choice


In the coming years, the focus of industrial robotics will shift from sensor hardware to building artificial intelligence (AI) that helps optimize sensor use and ultimately improves performance.


The combination of AI, machine vision and machine learning will usher in the next phase of robotics. Expect to see more data management and augmented analytics systems designed to help manufacturers achieve higher levels of operational excellence, resiliency, and cost-effectiveness.


This will combine machine vision with machine learning capabilities. Take random-order bin picking, one of the most sought-after robot applications. Earlier robotic systems required specialized computer-aided design (CAD) programming to ensure the robot could recognize shapes. While such CAD systems can identify any given item in a pick box, they run into trouble when the items appear in random order, as in a bin-sorting task.
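
As a rough illustration of why the classic approach struggles, the sketch below uses OpenCV template matching as a stand-in for CAD-based shape recognition (the file names and score threshold are assumptions). A fixed template scores well only when the part sits in the expected pose, which is exactly what a random-order pick box violates.

    import cv2

    # Hypothetical images: a photo of the pick box and a template of the part.
    scene = cv2.imread("pick_box.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

    # Slide the template over the scene, scoring the correlation at each position.
    scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)

    # A fixed template only fires when the part's pose matches the template's:
    # rotate or tilt the part and the score collapses, which is the weakness
    # that random-order picking exposes.
    if best_score > 0.8:  # assumed confidence threshold
        print("part found at", best_loc, "score", round(best_score, 2))
    else:
        print("no confident match; part may be rotated, tilted, or occluded")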


Advanced vision systems instead use passive imaging, in which the light that objects emit or reflect is captured to form an image. As a result, the robot can detect items automatically, regardless of their shape or order.


One example is Shibaura Machine's TSVision3D vision system, which uses two high-speed cameras to continuously capture 3D images. With intelligent software, the system processes these images and identifies the exact location of each item. From there, the robot can determine the most logical picking sequence and pick up items with sub-millimeter accuracy, just as easily as a human worker would.
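
Shibaura Machine does not publish TSVision3D's internal logic, so the following is only a toy sketch of the sequencing idea: given 3D item positions from a vision system, pick the topmost item first so the gripper never has to reach underneath another part. All data and names are hypothetical.

    # Hypothetical detections: (item id, x, y, z) in metres, z = height in the bin.
    detections = [
        ("bolt_1", 0.12, 0.30, 0.045),
        ("bolt_2", 0.15, 0.28, 0.081),
        ("bolt_3", 0.40, 0.22, 0.060),
    ]

    def pick_order(items):
        # Simple heuristic: highest items first, so nothing rests on top of
        # the part the gripper is about to lift.
        return sorted(items, key=lambda item: item[3], reverse=True)

    for item_id, x, y, z in pick_order(detections):
        print(f"pick {item_id} at ({x:.3f}, {y:.3f}, {z:.3f})")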


Robotics holds great potential for combining machine vision with robotic learning. Possible applications include vision-based drones, warehouse pick-and-place applications, and robotic sorting or recycling.



Trial and error process


With TSVision3D, we see robotic AI evolving to the point where it can interpret images as reliably as a human. Another key element of this evolution is machine learning, which allows robots to learn from mistakes and adapt.


An example is the Dactyl robotic system created by the artificial intelligence research lab OpenAI. With Dactyl, a simulated manipulator learns through trial and error; what it learns is then transferred to a real dexterous robotic hand, which, through this human-like learning, can grasp and manipulate objects more efficiently.


This process, known as deep reinforcement learning, is the next step in robotic AI. Through trial and error, as in the Dactyl system, robots can learn to perform more and varied tasks in different environments.
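
Dactyl's real training ran large-scale reinforcement learning in simulation; as a deliberately tiny stand-in, the sketch below runs tabular Q-learning on a made-up one-dimensional "reach the object" task, just to show the trial-and-error loop of acting, observing a reward, and updating a policy.

    import random

    # Toy task: the gripper starts at position 0 and the object sits at position 4.
    # Actions: move left (-1) or right (+1). Reward 1.0 on reaching the object.
    N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

    for episode in range(200):
        state = 0
        while state != GOAL:
            # Trial and error: mostly exploit what we know, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if nxt == GOAL else 0.0
            # Q-learning update: nudge the estimate toward the reward plus
            # the discounted value of the best follow-up action.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt

    print("learned policy:", ["right" if q[(s, 1)] >= q[(s, -1)] else "left"
                              for s in range(N_STATES - 1)])

Dactyl replaces this lookup table with deep neural networks and a simulated hand, but the underlying loop of act, observe, and update is the same.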



Edge Intelligence


Simply put, edge computing means processing data as close to its source as possible, giving better access to data and better prioritization of it. Rather than relying on "dumb" sensors such as traditional microphones or cameras, it uses smart sensors: microphones with built-in language processing, humidity and pressure sensors, or cameras equipped with computer vision.


Edge computing can be combined with the technologies above. A robotic arm can read data through smart sensors and 3D vision systems and send it to a server with a human-machine interface (HMI), where a worker can retrieve it.
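
As a concrete sketch of that data path, the snippet below assumes a smart sensor node that computes a result locally and sends only a compact summary to an HMI server over HTTP; the endpoint URL and payload fields are illustrative assumptions, not a real interface.

    import time
    import requests

    # Hypothetical HMI server endpoint on the local plant network.
    HMI_URL = "http://hmi-server.local:8080/api/vision-events"

    for _ in range(10):  # a few readings; a real cell would loop continuously
        # In practice this summary would come from the 3D vision pipeline;
        # the point is that only this compact result leaves the edge device,
        # never the raw camera frames.
        summary = {
            "part_id": "bolt_7",             # hypothetical identifiers
            "pose_mm": [120.4, 298.7, 45.2],
            "defect": False,
            "timestamp": time.time(),
        }
        requests.post(HMI_URL, json=summary, timeout=2)
        time.sleep(1.0)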


Using edge systems reduces data transfers to and from the cloud, easing network congestion and latency and allowing computations to run faster. These Industry 4.0 innovations will be used to improve the latest end-of-arm tooling, such as robot grippers and clamping systems for machining centers, making this hardware more accurate year after year.


We should expect to see more creativity and change in the field of industrial robotics. Improved vision systems, AI, and edge systems can also be combined to help ensure that manufacturers and their robots continue to thrive for years to come.




Source: Control Engineering Network / China Transmission Network