You can lead a virtual robot to a refrigerator, but you can’t make it pull out a drink.
ManipulaTHOR adds a highly articulated robotic arm to the Allen Institute for AI’s AI2-THOR artificial intelligence platform — which should make the platform far more capable for testing robot software even before the robots themselves are built.
AI2-THOR was programmed to find its way through virtual versions of indoor environments, such as kitchens and bathrooms. It could use computer vision to locate everyday objects, but the model didn’t delve deeply into the mechanics of moving those objects. Instead, it just levitated them, as if by video-game magic.
Now AI2-THOR is getting real.
“Imagine a robot being able to navigate a kitchen, open a refrigerator and pull out a can of soda,” AI2 CEO Oren Etzioni said in a news release. “This is one of the biggest and yet often overlooked challenges in robotics, and AI2-THOR is the first to design a benchmark for the task of moving objects to various locations in virtual rooms, enabling reproducibility and measuring progress.”
Etzioni said it’s taken five years to get AI2-THOR to this point.
“We can now begin to train robots to perceive and navigate the world more like we do, making real-world usage models more attainable than ever before,” he said.
Kiana Ehsani, an AI2 research scientist who worked on ManipulaTHOR, said the enhanced model could help train robots to assemble manufactured goods in factories, sort packaged goods in warehouses, or even prepare for space missions.
“This can generalize to any of that,” Ehsani told GeekWire. “Not at the moment … but we think of this environment as a framework that can enable researchers to develop models for any kind of object manipulation. It doesn’t have to be just in kitchens, or just in indoor scenes, or just in houses.”
ManipulaTHOR’s virtual robotic arm is designed to simulate the capabilities of the Kinova Gen3 modular robotic arm, a commercially available product that features six degrees of freedom.
Researchers can program the virtual arm in AI2-THOR 3.0 to move obstructions out of the way, grasp the objects it needs to manipulate, and move them the way they would move in the real world.
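In simulator frameworks like AI2-THOR, that kind of programming typically takes the form of discrete action commands issued to the agent. The sketch below is purely illustrative — the action names, fields, and coordinates are hypothetical stand-ins, not AI2-THOR 3.0’s actual API — but it shows the general shape of a pick-and-place command sequence a researcher might compose.

```python
# Illustrative only: composing arm commands as plain dicts, the way
# step-based simulators accept actions. Names and parameters here are
# hypothetical, not the official AI2-THOR API.

def arm_action(name, **params):
    """Package one arm command (action name plus its parameters)."""
    return {"action": name, **params}

# A hypothetical plan: approach, reach, grasp, carry, release.
plan = [
    arm_action("MoveAgentAhead", moveMagnitude=0.25),
    arm_action("MoveArm", position={"x": 0.0, "y": 0.9, "z": 0.4}),
    arm_action("PickupObject"),
    arm_action("MoveArm", position={"x": 0.3, "y": 1.1, "z": 0.2}),
    arm_action("ReleaseObject"),
]

for step in plan:
    print(step["action"])
```

In a real run, each command would go to the simulator, which reports back whether the motion succeeded or collided with something — which is exactly the feedback loop researchers use to tune their models.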
If the articulated arm is in the wrong configuration, and the robot smashes into a virtual faucet while turning on the kitchen tap, AI researchers can fine-tune their software to keep that from happening in the real world. More importantly, the computer models run on AI2-THOR 3.0 should be better prepared to cope with novel situations.
“We showed that if you train on a subset of the scenes that we have, and then take this robot and put it in a totally new environment that it’s never seen, it is still able to avoid obstacles and take objects to the goal location,” Ehsani said.
Now that ManipulaTHOR has been released, Ehsani and her AI2-THOR teammates are inviting researchers to take part in RoboTHOR Challenge 2021, held in conjunction with the Embodied AI Workshop at June’s Conference on Computer Vision and Pattern Recognition.
“The challenge will be in simulation, and the task will be to move toward an object, pick it up and then move it toward the goal location … without interfering with the rest of the room,” Ehsani said.
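A task like that implies a success criterion with two parts: the target object ends up at the goal, and the rest of the room is left undisturbed. The helper below is a hypothetical sketch of such a check (the function, thresholds, and inputs are assumptions, not the challenge’s actual scoring code).

```python
import math

def task_success(obj_pos, goal_pos, other_displacements, tol=0.1):
    """Hypothetical success check for a pick-and-place episode:
    the object must land within `tol` meters of the goal, and no
    other object in the room may have moved more than `tol` meters."""
    reached_goal = math.dist(obj_pos, goal_pos) <= tol
    room_undisturbed = all(d <= tol for d in other_displacements)
    return reached_goal and room_undisturbed

# Example: object placed 5 cm from the goal, nothing else moved.
print(task_success((0.0, 0.0, 0.0), (0.05, 0.0, 0.0), [0.0]))
```

The two-part structure is the point: a robot that reaches the goal by knocking everything else off the counter would still fail the episode.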
Of course, the acid test will come when real-world robots are programmed using the computer models developed for AI2-THOR’s virtual robots. Even before ManipulaTHOR’s release, AI2 was planning to do that as part of last year’s RoboTHOR Challenge — but because of the coronavirus pandemic, the real-world tests had to be postponed.
“That’s on the list, for sure, hopefully in the very near future,” Ehsani said.