DARPA to test AI-controlled jets in live-fly dogfights after successful simulations


DARPA’s mission to develop AI fighter jets has moved closer to take-off.

Algorithms developed under the defense research agency’s program defeated an Air Force pilot in virtual air combat last year. In February, the Pentagon’s “mad science” unit tested how they perform as a team.

The battle pitted two friendly F-16s against a single enemy aircraft. Each jet was armed with a cannon for close-range engagements and a missile for more distant targets.

Colonel Dan “Animal” Javorsek, program manager in DARPA’s Strategic Technology Office, said testing multiple weapons and aircraft introduced a new dynamic to the trials:

These engagements represent an important step in building trust in the algorithms, as they let us assess how the AI agents handle clear restrictions on shooting lanes designed to prevent fratricide. That is extremely important when operating with offensive weapons in a dynamic and confusing environment that includes a manned fighter. It also lets us increase the complexity and teamwork involved in maneuvering two aircraft against an adversary.

DARPA is also assessing how much pilots trust the systems. Javorsek said his team installed sensors in a jet to measure physiological responses, such as where a pilot’s head and eyes are pointed:

This lets us see how often the pilot cross-checks by looking out the window, and compare that to the time spent head-down on combat management tasks.

The agency now plans to test the AI on real-world aircraft. To do this, DARPA is building an aerodynamic model of an L-39 jet trainer, which the algorithms will use to make predictions and maneuvering decisions.

Once the model is complete, the agency will start modifying the aircraft so the algorithms can control it. The Pentagon plans to test them in live dogfights in late 2023 and 2024.

[Read: Iranian nuclear scientist allegedly assassinated via killer robot]

Critics, however, have questioned the value of the tests. They note that the rule-based nature of air-to-air combat is well suited to algorithmic decision-making, and that the “perfect information” provided by simulators is not available in the field.

Even if the AI performed just as well in the real world, American aircraft have been involved in only one aerial combat engagement in the past 20 years.

A more pressing concern is the rush to develop autonomous weapons. An AI arms race could encourage countries to cut corners on safety, and could even trigger an accidental war.

HT – The reader

Published March 23, 2021 – 18:42 UTC


