Americans taught robots to understand commands and navigate the terrain
Husky | Image from Clearpath Robotics

The U.S. Army Research Laboratory (ARL) has announced the development of a system that lets robots survey terrain, recognize and label objects, and carry out voice commands. According to MIT Technology Review, such a system should eventually make it possible to organize mixed human-robot units whose members interact on the battlefield much as soldiers interact with each other. Tests of the system on several robots, which were partially successful, took place in October 2019.

In recent years, several research programs in the interests of the U.S. military have pursued the same ultimate goal: robots that accurately understand human speech, follow people's commands, and report on how they carried them out. At the same time, such robots, equipped with artificial-intelligence systems, will have to be able to explain to a person why they made a particular decision during a task.

Last September, the U.S. Department of Defense published a roadmap for the development of military robots. The document describes a vision of what unmanned aerial vehicles and ground, surface, and underwater robots should become by 2042. The U.S. military's top priorities for such systems include open architectures, modular design, a high degree of autonomy, and the formation of mixed units of humans and robots.

Several organizations are participating in the ARL-led terrain- and voice-recognition project, including the Massachusetts Institute of Technology, the Carnegie Mellon University Robotics Institute, QinetiQ North America, General Dynamics Land Systems, NASA's Jet Propulsion Laboratory, the University of Central Florida, and the University of Pennsylvania. The research is carried out as part of the Robotics Collaborative Technology Alliance (RCTA) program.

The system developed for the project's robots consists of a computing unit built around a quad-core Intel Core i7 processor, a lidar, and electro-optical cameras. Several neural-network algorithms were developed for it. As a result, a robot carrying the system can, while moving, build an overview map of the area, recognize the objects on it, assign them semantic labels, and retrieve extended information about some of them. In addition, the system recognizes voice commands.
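The article does not describe ARL's actual data structures, but the idea of an overview map whose objects carry semantic labels can be sketched roughly like this (all names here — `MapObject`, `SemanticMap`, `find` — are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class MapObject:
    label: str            # semantic label, e.g. "building", "car"
    position: tuple       # (x, y) coordinates on the overview map

@dataclass
class SemanticMap:
    objects: list = field(default_factory=list)

    def add(self, label, position):
        """Record a recognized object and its semantic label."""
        self.objects.append(MapObject(label, position))

    def find(self, label):
        """Return all mapped objects carrying a given semantic label."""
        return [o for o in self.objects if o.label == label]

# As the robot moves, recognized objects accumulate on the map.
m = SemanticMap()
m.add("building", (10.0, 4.5))
m.add("car", (3.2, 7.1))
m.add("building", (22.0, 1.0))
print(len(m.find("building")))  # → 2
```

A map organized this way is what lets a spoken word like "building" be matched against concrete objects later on.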

In particular, a robot equipped with the new system can be given the command: "Look at what is happening behind the building." If several objects on the robot's overview map carry the semantic label "building," the robot will ask which structure is meant. During testing, the new system was installed on a small four-wheeled Husky robot with a manipulator arm. It was given commands to clear the area, remove a specific object from the road, or conduct reconnaissance. When asked to remove an object, the robot had to work out on its own how best to grip it with the manipulator.
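The clarification behavior described above — act when the label is unambiguous, ask back when it is not — amounts to a small resolution step over the labeled map. A minimal sketch, with hypothetical names and a plain dict standing in for the map:

```python
def resolve_target(labeled_objects, label):
    """Resolve a spoken command's target against the semantic map.

    Returns (target, clarification): exactly one match yields a target;
    zero or several matches yield a clarification question instead.
    """
    matches = labeled_objects.get(label, [])
    if len(matches) == 1:
        return matches[0], None
    if not matches:
        return None, f"I see no object labeled '{label}'."
    return None, f"Which {label} do you mean? I see {len(matches)}."

# Overview map: semantic label -> positions of matching objects.
overview = {"building": [(10, 4), (22, 1)], "car": [(3, 7)]}

target, question = resolve_target(overview, "building")
# ambiguous: target is None, question asks which building is meant
target, question = resolve_target(overview, "car")
# unique: target == (3, 7), no question needed
```

The real system presumably resolves references with far richer spatial reasoning; this only illustrates the ask-back pattern the article reports.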

A total of three tests were carried out. Two were completely successful: the robot understood the voice orders and executed them. The third was only partially successful: the robot recognized the commands, but the developers had to restart it because the navigation system froze while it was performing the task.

For now, the system partly relies on hard-coding: the parameters of recognized objects are preset by programmers and simply retrieved by the robot from a database. Cars are one example: the robot can recognize a car on its own and assign it the corresponding semantic label, but the car's parameters (number of wheels, presence of an engine, number of doors, headlight positions) are hard-coded in advance.
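The split between what is perceived and what is hard-coded can be made concrete with a toy lookup table (the structure and the `describe` helper are assumptions for illustration, not ARL's actual database):

```python
# Hard-coded parameter database: the recognizer supplies only the
# semantic label; the detailed attributes are looked up, not perceived.
OBJECT_PARAMETERS = {
    "car": {"wheels": 4, "has_engine": True, "doors": 4,
            "headlights": "front"},
    "building": {"has_entrance": True},
}

def describe(label):
    """Return the preset parameters for a recognized label, if any."""
    return OBJECT_PARAMETERS.get(label, {})

print(describe("car")["wheels"])  # → 4
print(describe("tree"))          # → {} (no preset entry)
```

This is exactly the limitation the next paragraph addresses: an unfamiliar label simply has no entry, so the researchers want the parameters inferred automatically instead.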

At the same time, the researchers are already developing an algorithm that determines the parameters of a recognized object automatically. When such an algorithm might appear is not yet known.

Ultimately, the military expects a system that can not only be given unambiguous orders, such as "look at the far right car," but also be asked questions that gauge the probability of completing a mission: "I think you can complete this task. What do you think?" The RCTA project is designed to run for ten years. By its end, the U.S. military expects robots that can be sent to reconnoiter or clear a path of mines, and that soldiers can interact with no worse than with trained dogs.

Aakash Molpariya
