Welding has always been a physical, hands-on job — but that may be about to change. That’s because researchers from the University of Illinois Urbana-Champaign have developed a new mind-control system that makes it possible for someone to control a welding robot by transmitting mental instructions via an electroencephalography (EEG) cap. While robot welders are already used in industry, this innovation could help make the process more efficient, in addition to keeping human workers at a safe distance from the potentially deadly machines they work with.
“Welding is a high-skilled task,” Thenkurussi Kesavadas, professor of Industrial & Enterprise Systems Engineering, told Digital Trends. “A skilled welder can identify the exact joints that require welding based on drawings of the part. But when robots are used to weld, programming requires [an] additional skill set and time. Our research is focused on automating robotic welding by using human knowledge about welding and computer vision to carry out automation.”
This human knowledge is gathered using the aforementioned EEG caps. As the user watches images on a computer screen of different joints that could potentially be welded, the brain-computer interface (BCI) recognizes the operator’s intent based on how their brain responds when the most appropriate option appears.
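The article doesn’t detail the team’s signal-processing pipeline, but selection schemes like the one described often resemble a P300-style paradigm: each candidate option is flashed on screen, EEG epochs following each flash are averaged, and the option whose averaged response shows the strongest event-related deflection around 300 ms is taken as the intended one. The sketch below is purely illustrative — the simulated signals, parameters, and function names are assumptions, not the researchers’ actual system.

```python
import numpy as np

# Illustrative P300-style selection sketch (NOT the Illinois team's pipeline).
# All signal parameters here are hypothetical.

rng = np.random.default_rng(0)

FS = 250        # sampling rate in Hz (assumed)
EPOCH = FS      # one-second EEG epoch after each image flash
N_JOINTS = 4    # candidate weld joints shown on screen
N_TRIALS = 20   # flashes per joint

def simulate_epoch(is_target: bool) -> np.ndarray:
    """Simulated single-channel EEG epoch; the target carries a P300-like bump."""
    signal = rng.normal(0.0, 1.0, EPOCH)
    if is_target:
        t = np.arange(EPOCH) / FS
        # Gaussian bump centered ~300 ms after the flash
        signal += 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return signal

def select_joint(target: int) -> int:
    """Average epochs per joint, score the 250-350 ms window, pick the max."""
    scores = []
    for joint in range(N_JOINTS):
        epochs = np.stack(
            [simulate_epoch(joint == target) for _ in range(N_TRIALS)]
        )
        erp = epochs.mean(axis=0)  # averaging suppresses uncorrelated noise
        window = erp[int(0.25 * FS):int(0.35 * FS)]  # where a P300 would sit
        scores.append(window.mean())
    return int(np.argmax(scores))
```

With enough repetitions, the averaged response over the target option stands out clearly from the noise, which is why such interfaces flash each option many times before committing to a selection.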
“At the current time, this is a tech demo,” Kesavadas said. “We are using BCI and A.I.-based computer vision to automatically weld in a simulated environment. Our plan is to try this [in] an industrial setting. The problem it will solve is that robotic welding of small batch sizes [is] not economical. In high-volume welding, the time required for programming is negligible compared to [the] cost of welding millions of parts. But welding using robots is generally not economical for a small number of parts since programming takes too much time. This technology is looking at innovative ways to use human skill to create a low-cost way of introducing robotics to welding.”
Going forward, the team will be exploring more advanced welding techniques, including a wider range of 3D geometries. “We will also look at integrating more modalities of input to help the operator,” Kesavadas said. “We are [a] couple of years away from commercialization.”