At the height of the exchange of accusations between the United States and China over the “Covid-19” disease, signs of a new war between the two countries appeared: an artificial intelligence war. This leads us to ask: is this technology ready to operate safely, and can military AI be easily deceived?
Deceiving the algorithms
Although military AI technologies dominate military strategy in both the US and China, what sparked the crisis came last March, when Chinese researchers mounted a brilliant, and potentially devastating, attack against one of America’s most valuable technological assets: the Tesla electric car.
A research team from the security laboratory of the Chinese technology giant “Tencent” found several ways to deceive the artificial intelligence algorithms in the Tesla electric car. By carefully altering the data fed to the car’s sensors, the team managed to trick and confuse the vehicle’s AI.
The team fooled Tesla’s algorithms for detecting raindrops on the windshield and for following the lines on the road: the windshield wipers were made to operate as if it were raining, and modified lane markings confused the autonomous driving system into crossing into the oncoming traffic lane, in violation of traffic rules.
After the success of the experiment, the team emphasized that it would be easy to deceive “deep learning” algorithms, which are sweeping through fields as varied as face recognition and cancer diagnosis, once their weaknesses are discovered.
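How such deception works can be illustrated with a toy sketch. The Python example below is a minimal illustration of the principle, not Tencent’s actual method: the logistic-regression “classifier”, its random weights, and the “rain” label are all invented for the example. The point is that an attacker who knows a model’s parameters can push it across its decision boundary with a change that stays tiny for every individual input feature.

```python
# Minimal sketch: a tiny, carefully chosen change to the input shifts the
# output of a known model. The weights and the "rain"/"no rain" task are
# hypothetical, invented for this illustration.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)  # hypothetical trained weights of a binary classifier
b = 0.1

def predict(x):
    """Probability the model assigns to the class 'rain'."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = rng.normal(size=100)  # a benign sensor reading
print(f"clean score:     {predict(x):.3f}")

# The attacker nudges each feature a tiny step in the direction that raises
# the score; for this model that direction is simply the sign of each weight.
epsilon = 0.25
x_adv = x + epsilon * np.sign(w)
print(f"perturbed score: {predict(x_adv):.3f}")
print(f"largest per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

Even though no single feature moves by more than a small epsilon, the hundred small nudges add up and saturate the classifier’s output.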
Protecting national security
The Pentagon’s proposed 2020 budget of $718 billion allocates $927 million for artificial intelligence and machine learning. Current projects range from testing whether AI can predict when tanks and trucks need maintenance to work at the cutting edge of weapons technology.
Early this year, the United States announced a grand strategy to harness AI in many parts of the military, including intelligence analysis, decision-making, vehicle autonomy, logistics, and weaponry.
The Tesla deception might not seem like a serious strategic threat to the United States, but what if similar techniques were used to trick drones, or software that analyzes satellite imagery, into seeing things that aren’t there, or not seeing things that are?
The Pentagon’s push for AI is driven in part by fear of the ways competitors might use the technology.
Last year Jim Mattis, then Secretary of Defense, sent a memo to President Donald Trump warning that the United States was already falling behind on AI.
In July 2017, China formulated its Artificial Intelligence Strategy, declaring that “the major developed countries of the world take the development of artificial intelligence as a key strategy to enhance national competitiveness and protect national security.”
A few months later, Russia’s Vladimir Putin ominously declared: “Whoever becomes a leader in [artificial intelligence] will become the ruler of the world.”
The ambition to build the smartest and deadliest weapons is understandable, but as the Tesla hack shows, an enemy who knows how an AI algorithm works could render it useless, or even turn it against its owners.
Deep learning networks
As military uses of AI accumulate, these weaknesses have drawn a great deal of attention and raised many concerns. For example, minimal changes to an input image can cause a network to misclassify it: the altered image looks essentially unchanged to a person but completely different to the AI algorithm.
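In code, the canonical version of this trick is the fast gradient sign method (FGSM). The sketch below, assuming PyTorch, is a generic illustration rather than any specific system’s attack: the small untrained network and the random image are stand-ins for a real trained classifier, on which such a perturbation typically flips the prediction while leaving the image visually unchanged.

```python
# FGSM sketch: perturb an image along the sign of the loss gradient with
# respect to the input. The tiny network and random image are stand-ins for
# a real trained classifier and photo.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])

# Compute the gradient of the loss with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Step every pixel a tiny amount in the direction that increases the loss:
# invisible to a person, often decisive for the network.
epsilon = 0.03
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv_image).argmax(dim=1).item())
```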
One defensive solution against hostile data lies in the training of the deep learning networks themselves. Multi-layered neural networks can recognize objects with unprecedented skill, and training them involves feeding them the kinds of images and sensor data essential to military operations; that training can also include hostile examples, so the networks learn to withstand them.
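One common version of that idea is adversarial training, sketched below under the same assumptions (PyTorch, stand-in model and data); it is a generic technique, not a description of any particular military program. At each step the model generates FGSM-perturbed copies of its own inputs and is then trained to classify them correctly anyway.

```python
# Adversarial-training sketch: train on FGSM-perturbed inputs so the network
# learns to resist them. Model, data, and hyperparameters are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epsilon = 0.03

def fgsm(images, labels):
    """Craft FGSM adversarial examples against the current model."""
    images = images.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

for _ in range(100):
    # Stand-in batch; a real pipeline would iterate over a DataLoader.
    images = torch.rand(16, 3, 32, 32)
    labels = torch.randint(0, 10, (16,))

    adv_images = fgsm(images, labels)   # attack the current model...
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(adv_images), labels)
    loss.backward()                     # ...then learn to resist that attack
    optimizer.step()
```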
The Pentagon has begun to take notice. In August of this year, the Defense Advanced Research Projects Agency (DARPA) announced several large AI research projects, including one focused on adversarial machine learning.
The backlash against the military use of AI is understandable, but it may miss the bigger picture. Even as people worry about intelligent killer robots, perhaps the biggest near-term danger is an algorithmic fog of war, exemplified by adversarial military AI: a fog that even the smartest machines cannot peer through.