How AI-Powered Machines Could Transform Excavation in Complex Terrains

10/28/2024 Bruce Adams

A team led by University of Illinois Urbana-Champaign computer science professor Kris Hauser has received a $963,000 NSF grant to use graph neural networks to help robots predict and navigate terrain interactions.


How can robots dig through sand, soil, rocks, and combinations of these materials to create functional structures? A team from the University of Illinois Grainger College of Engineering, led by computer science professor Kris Hauser, aims to answer this question. Backed by a $963,000 NSF grant, the team is leveraging machine learning—specifically graph neural networks (GNNs)—to help robots efficiently predict and navigate terrain interactions. This research has the potential to revolutionize how robots "feel" and "see" the environment, combining tactile feedback with camera-based mapping.

The team’s extensive experience in robot learning has highlighted an important gap: GNNs, though promising, have largely been used to predict the dynamics of small objects in confined spaces such as tabletops. Hauser’s team, however, aims to expand this approach to larger, more complex environments where every aspect of the terrain can shift and deform.


“So, we thought, why not try to predict behavior in large, deformable environments?” Hauser explains. “We came up with some interesting ideas on how to extend modeling to these real-world conditions, and we’re developing a learning-based simulator that can model how earth moves when a robot scoops, dumps, or smooths it.”
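The kind of graph-based dynamics model Hauser describes can be illustrated with a minimal sketch: grains of material become graph nodes, nearby grains are linked by edges, and a learned message-passing step predicts how each grain accelerates. Everything below, including the connection radius, the edge features, and the weight shapes, is a hypothetical illustration (with random weights standing in for learned ones), not the team's actual model.

```python
import numpy as np

def build_graph(positions, radius):
    """Link every pair of particles closer than `radius` (directed edges)."""
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and np.linalg.norm(positions[i] - positions[j]) < radius]

def message_passing_step(positions, velocities, edges, w_msg, w_upd):
    """One GNN layer: each particle sums messages from its neighbors, then
    maps its own state plus the aggregate to a predicted acceleration."""
    n, hidden = len(positions), w_msg.shape[1]
    messages = np.zeros((n, hidden))
    for i, j in edges:
        # Edge features: relative position and velocity of neighbor j.
        rel = np.concatenate([positions[j] - positions[i],
                              velocities[j] - velocities[i]])
        messages[i] += np.tanh(rel @ w_msg)
    node_input = np.concatenate([velocities, messages], axis=1)
    return node_input @ w_upd  # one (x, y) acceleration per particle

# Tiny demo: five 2-D "grains" with random (untrained) weights.
rng = np.random.default_rng(0)
pos = rng.random((5, 2))
vel = np.zeros((5, 2))
edges = build_graph(pos, radius=0.5)
w_msg = rng.standard_normal((4, 8))   # 4 edge features -> 8 hidden dims
w_upd = rng.standard_normal((10, 2))  # 2 velocity + 8 message dims -> 2-D accel
acc = message_passing_step(pos, vel, edges, w_msg, w_upd)
```

In a trained simulator of this style, the weights are fit so that repeatedly applying the step reproduces how real granular material settles and flows; the graph is rebuilt each step as particles move.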

In collaboration with co-principal investigator Yunzhu Li from Columbia University, the team is working on new ways to model how robots interact with terrain during activities like scooping, dumping, and shaping. Their focus is on learning the dynamics of these key areas to improve robot performance.

“The goal is to make GNN models versatile enough to adapt to a variety of tasks, from large-scale excavation to intricate grading, and on multiple materials,” Hauser says. “One of the key components of learning multi-material models is being able to observe how something has interacted and understand what material generated that behavior,” he adds. “For example, if a robot encounters unexpected resistance while digging, the system will adapt by identifying the material—perhaps compacted soil mixed with rocks—and adjusting its approach in real time. This adaptability is critical for robots working in unpredictable environments.”
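The adaptation Hauser describes, inferring the material from how it resists the scoop and then adjusting behavior, can be sketched as a simple classifier over force-feedback statistics. The material profiles, feature choices, and speed factors below are invented for illustration; a real system would learn such models from data rather than use hand-set numbers.

```python
import numpy as np

# Hypothetical force signatures per material: (mean resistance, variability).
# These values are illustrative, not measured data.
MATERIAL_PROFILES = {
    "dry sand":        np.array([40.0, 5.0]),
    "compacted soil":  np.array([120.0, 15.0]),
    "soil with rocks": np.array([150.0, 60.0]),
}

def identify_material(force_trace):
    """Classify terrain by matching a dig's force trace to the nearest profile."""
    feats = np.array([np.mean(force_trace), np.std(force_trace)])
    return min(MATERIAL_PROFILES,
               key=lambda m: np.linalg.norm(feats - MATERIAL_PROFILES[m]))

def adjust_dig_speed(material, base_speed=1.0):
    """Slow the scoop when harder or rockier material is detected."""
    scale = {"dry sand": 1.0, "compacted soil": 0.6, "soil with rocks": 0.4}
    return base_speed * scale[material]

# Steady, high resistance suggests compacted soil -> slow the dig.
material = identify_material([110.0, 120.0, 130.0])
speed = adjust_dig_speed(material)
```

Here a smooth but heavy force trace maps to compacted soil and a reduced scoop speed; a jagged trace (high variability) would instead indicate rocks mixed into the soil.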

The project’s ambitions are matched by the hardware being developed. The team is building a 3- to 4-foot robotic excavator with a powerful GPU onboard to handle the intensive computing required for these operations. Hauser is also optimistic about scaling down the technology, mentioning plans for a smaller experimental platform, an RC excavator of the kind used by hobbyists, that will still pack enough mobile GPU power to perform complex tasks without relying on cloud computing. Additionally, the team plans to release open-source software that can model and manipulate granular materials, expanding access to cutting-edge research tools.

The team’s work aims to make robots more self-sufficient, especially in remote areas where internet connectivity may be limited. By embedding powerful computing directly into the machines, these robots could one day tackle large-scale construction and excavation projects with minimal human oversight.


Grainger Engineering Affiliations

Kris Hauser is an Illinois Grainger Engineering professor of computer science, electrical and computer engineering, and mechanical science and engineering. 



This story was published October 28, 2024.