Scientists, including those of Indian origin, have developed robots that can follow spoken instructions, an advance that may make it easier for people to interact with automated machines at home and in the workplace.
“The issue we’re addressing is language grounding, which means having a robot take natural language commands and generate behaviours that successfully complete a task,” said Dilip Arumugam, from Brown University in the US.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” Arumugam said.
For example, someone in a warehouse working side-by-side with a robotic forklift might say to their robotic partner, "Grab that pallet." That is a highly abstract command that implies a number of smaller sub-steps – lining up the lift, putting the forks underneath and hoisting the pallet up.
However, other common commands might be more fine-grained, involving only a single action: “Tilt the forks back a little,” for example. Those different levels of abstraction can cause problems for current robot language models, the researchers said.
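The mismatch the researchers describe can be illustrated with a toy sketch. This is not the Brown team's actual model, which infers a command's abstraction level statistically; here a hand-written lookup (all action and command names are hypothetical) simply shows how one abstract command expands into several sub-steps while a fine-grained command maps to a single action:

```python
# Illustrative sketch only; names are hypothetical, not from the study.

# Primitive actions the forklift could execute directly.
PRIMITIVES = {"align_lift", "insert_forks", "raise_forks", "tilt_forks_back"}

# A high-level command expands into an ordered sequence of primitives.
ABSTRACT_COMMANDS = {
    "grab that pallet": ["align_lift", "insert_forks", "raise_forks"],
}

# A fine-grained command maps to exactly one primitive.
FINE_COMMANDS = {
    "tilt the forks back a little": ["tilt_forks_back"],
}

def ground(command: str) -> list[str]:
    """Map a natural-language command to a plan of primitive actions."""
    key = command.lower().strip().rstrip(".")
    if key in ABSTRACT_COMMANDS:
        return ABSTRACT_COMMANDS[key]
    if key in FINE_COMMANDS:
        return FINE_COMMANDS[key]
    raise ValueError(f"cannot ground command: {command!r}")

print(ground("Grab that pallet"))              # expands to three sub-steps
print(ground("Tilt the forks back a little"))  # a single primitive action
```

A system that assumes every command sits at one fixed level of abstraction would handle one of these two cases poorly, which is the inefficiency the researchers set out to remove.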