Deep Dive into AI for Defence and Military #1: radars, sensors, and transparent battlefield
Introduction to the data from the modern battlefield and what AI can do with it
I am starting a new series at the intersection of two topics of interest: an old passion for technology and a newly discovered connection to warfare. I am a civilian, born in Ukraine, who has lived in Western Europe for many years, so I have many concerns about the war in Ukraine and the possibility of it spreading to the European Union. I want to understand military technology better and help develop it, so that we can make a positive impact in bringing and maintaining peace in the world.
As a new person in the field, I have encountered the following challenges:
- availability of data, due to the low level of digitalization and the security constraints of the domain; I plan to address this with mathematical modeling and simulation until I can work with real data from the field
- the big-picture view (how a specific data-driven decision affects tactical and strategic objectives); I found my preliminary answers in Jack Watling’s excellent book “The Arms of the Future”, which will serve as the structure for my blog posts
In this article, I aim to describe the main sensors used on the modern battlefield, what related data looks like, and what AI can do with it. As always, all the code is on my GitHub and you can experiment with the simulations in the HuggingFace space. If you’re working on a related project and are open to collaboration — let’s connect and discuss what we can do together.
Surveying the sensors on the battlefield
AESA radars
ChatGPT describes AESA (Active Electronically Scanned Array) radars as follows, which is consistent with Wikipedia and the illustrations above:
AESA is like a super camera that uses radio waves to see in many directions at once, without moving. It’s really fast at checking what’s around, which is great for airplanes and ships to spot things quickly. AESA works by using a bunch of small antennas that send and receive radio waves. Each antenna can be controlled on its own, so the system can point its “view” in different directions without having to move physically.
To make a simplified radar data simulation (a minimal code sketch follows the list below), we will need:
- Moving object specifications (we will simulate a missile, plane, and UAV): position, velocity, radar cross section (a measure of how detectable an object is with radar)
- Radar specifications: radar position, radar range, radar frequency (the operational frequency of the radar, used to calculate the Doppler shift)
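Here is that sketch: a toy detection model, not the actual code from my repository. The `simulate_detection` helper, the unscaled SNR term, and the example parameter values are all assumptions made for illustration.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def simulate_detection(obj_pos, obj_vel, rcs, radar_pos, radar_freq, max_range):
    """Toy AESA detection: range, closing speed, Doppler shift, and an unscaled SNR."""
    rel = np.asarray(obj_pos, dtype=float) - np.asarray(radar_pos, dtype=float)
    rng = np.linalg.norm(rel)                      # distance to the target, m
    if rng > max_range:
        return None                                # outside radar coverage
    closing_speed = -np.dot(obj_vel, rel / rng)    # positive when approaching, m/s
    doppler = 2 * closing_speed * radar_freq / C   # two-way Doppler shift, Hz
    snr = rcs / rng ** 4                           # radar equation trend: RCS / R^4
    return {"range_m": rng, "closing_speed_ms": closing_speed,
            "doppler_hz": doppler, "snr_unscaled": snr}

# Example: a small UAV 5 km away, flying towards an X-band (10 GHz) radar at 30 m/s
print(simulate_detection(obj_pos=[5000, 0, 100], obj_vel=[-30, 0, 0], rcs=0.1,
                         radar_pos=[0, 0, 0], radar_freq=10e9, max_range=50e3))
```

Repeating this over many time steps for the missile, the plane, and the UAV gives the kind of synthetic tracks used in the rest of this section.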
Just a reminder: the code is on my GitHub and you can experiment with the simulations in the HuggingFace space. This is what a simple simulation looks like:
If you want to dive even deeper into real data produced by AESA radars, I recommend checking out the SAAB repository with much more detailed mathematical modeling. If we look for applications of AI and ML to AESA radar data, we will inevitably find:
- Moving target detection and tracking, for example, drone detection and classification
- Adaptive beamforming using deep learning: the most interesting and non-trivial application, replacing iterative algorithms with a single forward pass of a neural network (I did similar work in financial simulations a while ago)
It’s easy to see how we can attack the moving target identification problem with our simulated data. Even when the trajectory and velocity profiles are similar, we still see clear differences in SNR and Doppler shift, which should go into the feature set of a classification model built with any ML algorithm, including simple kNN models when we don’t have much data.
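As a minimal sketch of that idea, assuming we have already extracted per-track features such as mean SNR and Doppler spread from the simulation (the feature values below are invented for illustration, not real radar measurements):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative per-track features: [mean SNR (dB), Doppler spread (Hz)].
# The numbers are invented for this sketch; in practice, scale the features first.
X_train = np.array([
    [35.0, 4000.0],  # missile: strong return, large Doppler
    [33.0, 3800.0],
    [28.0, 1200.0],  # plane
    [27.0, 1100.0],
    [12.0,  150.0],  # UAV: weak return, small Doppler
    [11.0,  180.0],
])
y_train = ["missile", "missile", "plane", "plane", "uav", "uav"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[13.0, 200.0]]))  # expected: ['uav']
```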
Signal intelligence / Spectrometers
Let’s ask ChatGPT again for a simple explanation:
In military electronic warfare, signal intelligence is like using high-tech listening devices to overhear enemy plans and movements. Devices like the Russian Torn-MDM act as advanced listeners, able to pick up various electronic signals to gather information or disrupt the enemy’s communications. Think of it as tuning into multiple radio stations at once to figure out where the broadcasts are coming from and what they’re saying, but with the added twist of sometimes being able to jam the signal or send back fake messages to confuse the enemy.
Let’s simulate data similar to this paper, which distinguishes different kinds of events by their ultraviolet (UV) light signatures (armor-piercing round launch, high-explosive round launch, RPG launch, TNT explosion):
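Here is a minimal sketch of such a simulation: each event type is modeled as a sum of Gaussian emission lines plus noise. The center wavelengths, widths, and amplitudes below are invented for illustration and are not taken from the paper.

```python
import numpy as np

wavelengths = np.linspace(200, 400, 400)  # UV band, nm

def spectrum(peaks, noise=0.02, rng=np.random.default_rng(0)):
    """Sum of Gaussian emission lines plus noise: a crude stand-in for a UV signature."""
    lines = sum(a * np.exp(-((wavelengths - c) ** 2) / (2 * w ** 2)) for c, w, a in peaks)
    return lines + rng.normal(0, noise, wavelengths.shape)

# (center nm, width nm, amplitude) per event type; all values are made up
event_signatures = {
    "apfsds_launch":   spectrum([(250, 8, 1.0), (310, 12, 0.4)]),
    "he_round_launch": spectrum([(260, 15, 0.8), (340, 20, 0.6)]),
    "rpg_launch":      spectrum([(280, 10, 0.9)]),
    "tnt_explosion":   spectrum([(230, 25, 1.2), (360, 30, 0.9)]),
}
```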
Again, we can see, even visually, how different the spectral profiles of these events are. How can we apply AI to such data and events?
- Threat identification and classification: the most obvious application. ML algorithms can be trained to distinguish between different types of emissions (like those from an APFSDS launch vs. an RPG launch), and classification models trained on labeled datasets of emission signatures can recognize and categorize threats in real time (a sketch follows this list).
- Predictive maintenance: instead of analyzing the enemy’s events, you use emission data from friendly assets. Predictive models can forecast when maintenance of military equipment is needed by detecting subtle changes in its emission signatures, indicative of wear or impending failure.
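Continuing the simulation sketch above (it reuses the `event_signatures` dictionary defined there; the noise level and the choice of classifier are arbitrary), training a simple classifier on noisy realizations of these signatures could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a toy labeled dataset: many noisy realizations per simulated event type.
rng = np.random.default_rng(1)
X, y = [], []
for label, base in event_signatures.items():
    for _ in range(50):
        X.append(base + rng.normal(0, 0.05, base.shape))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```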
Of course, sensor fusion (combining measurements from different sensors about the same events) will boost the performance of AI systems. The code for the simulations above is on my GitHub and you can explore the simulations in the HuggingFace space.
Acoustic panels
What can ChatGPT tell us about them?
Military acoustic systems are specialized tools that detect and analyze sound to identify enemy movements or vehicles. They use advanced microphones to pick up noises from far away and figure out what’s making them, like distinguishing between a truck and a tank. This helps soldiers know where the enemy is and what they’re up to without being detected themselves, providing a strategic advantage in planning and defense.
There are civilian applications of this tech (like gunshot detection at the Lafayette Police Department and acoustic drone detectors for border control, sports event security, etc.). Let’s study drone detection applications in a bit more detail and look at the data, assuming a civilian scenario where we need to detect a drone in the wild, with cars, nature, and people speaking around. In the DronePrint paper from 2021, this is how the data profile looks:
Even with the naked eye, it’s relatively easy to separate “natural” and “mechanical” sound profiles, and among the mechanical ones, drone spectra clearly differ from the rest. In the paper, the authors also demonstrate that they can identify drones with AI based on this data profile with high accuracy:
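As a rough illustration of what such a pipeline can look like (this is not the DronePrint method; librosa, the MFCC summary, the random forest, and the file names are all my assumptions for the sketch):

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path, sr=16000, n_mfcc=20):
    """Load an audio clip and summarize it by the mean of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

# Placeholder file lists; substitute your own labeled recordings.
drone_clips = ["drone_01.wav", "drone_02.wav"]
background_clips = ["traffic_01.wav", "speech_01.wav"]

X = np.array([mfcc_features(p) for p in drone_clips + background_clips])
y = [1] * len(drone_clips) + [0] * len(background_clips)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([mfcc_features("unknown_clip.wav")]))  # 1 = drone, 0 = background
```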
Laser vibrometers
Laser vibrometers in the military are tools that use lasers to measure vibrations from a distance. They can detect tiny movements on the surface of objects, like a vehicle or building, by bouncing a laser beam off them. This helps figure out if something is moving or if machinery is running inside, without needing to get close. It’s like having a superpower to feel vibrations from far away, giving soldiers information about enemy activities without being seen.
No need to quote the author of the explanation above. One of the most common applications of laser vibrometers is landmine detection (as can be seen in the illustrations above). Let’s simulate the data and study what it looks like (with code on my GitHub and an interactive demo in the HuggingFace space):
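Here is a minimal sketch of such a simulation: bare soil gives a smooth surface-velocity spectrum, while a buried mine adds a resonance peak. The resonance frequency, peak shape, and noise level are invented for illustration and are not the model from my repository.

```python
import numpy as np

freqs = np.linspace(50, 600, 500)  # acoustic excitation frequencies, Hz

def surface_response(resonance=None, rng=np.random.default_rng(2)):
    """Toy surface-velocity spectrum as seen by a laser vibrometer.

    Bare soil: smooth roll-off. Buried mine: the soil-mine system adds a
    resonance peak (all values below are invented for illustration).
    """
    base = 1.0 / (1.0 + (freqs / 300.0) ** 2)          # smooth soil response
    if resonance is not None:
        f0, q = resonance
        base = base + 2.0 / (1.0 + q * ((freqs - f0) / f0) ** 2)
    return base + rng.normal(0, 0.02, freqs.shape)

soil_only = surface_response()
over_mine = surface_response(resonance=(140.0, 200.0))  # resonance peak near 140 Hz
```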
What can AI and machine learning do with this data?
- Detecting different types of landmines for separate treatment and extraction or destruction (to be fair, this can be done without AI as well)
- Detecting vehicle engines and classifying them from a distance with 96%+ accuracy
- Another NATO paper on landmine and vehicle detection
Again, it’s worth mentioning that the best approach will involve sensor fusion, where different sensors augment and verify each other’s signals for better accuracy and overall system reliability.
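A minimal sketch of feature-level fusion, assuming we already have per-event feature vectors from two sensors (for example, an acoustic summary and a vibrometer resonance measurement; all numbers and labels below are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(acoustic_features, vibrometer_features):
    """Feature-level fusion: concatenate per-event feature vectors from two sensors."""
    return np.concatenate([acoustic_features, vibrometer_features])

# Hypothetical per-event features (invented numbers): 1 = vehicle present, 0 = background
events = [
    (np.array([0.8, 0.1, 0.3]), np.array([2.1, 140.0]), 1),
    (np.array([0.7, 0.2, 0.2]), np.array([1.9, 150.0]), 1),
    (np.array([0.1, 0.9, 0.4]), np.array([0.1, 310.0]), 0),
    (np.array([0.2, 0.8, 0.5]), np.array([0.2, 290.0]), 0),
]

X = np.array([fuse(a, v) for a, v, _ in events])
y = [label for _, _, label in events]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([fuse(np.array([0.75, 0.15, 0.25]), np.array([2.0, 145.0]))]))
```

Training on fused vectors lets one sensor compensate when the other is noisy or jammed; more elaborate schemes fuse at the decision level instead, for example by combining the outputs of per-sensor classifiers.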
Electro-optical sensors
Image-intensifying, infrared, and thermal imaging systems are like special glasses that let soldiers see in the dark, through smoke, or even spot warm objects hidden in the environment. They work by amplifying light, detecting heat signatures, and creating detailed images based on temperature differences, providing a clear view of surroundings under conditions where normal vision fails. This technology gives soldiers a crucial advantage, allowing them to detect and monitor enemies unseen with the naked eye.
Again, this is a very simplified ChatGPT explanation, and we don’t need to simulate visual data to understand what it looks like. However, it’s worth mentioning that, with the introduction of cheap cameras on the battlefield, many of the problems mentioned above are now being attacked with the optical data feed alone:
- scatterable landmines can be detected from UAV cameras with 70%+ accuracy
- drones can be detected with very high accuracy using cameras, which can also be used to navigate drones automatically
- recent computer vision architectures like CLIP allow military vehicle detection with high precision and recall even without task-specific training (zero-shot detection); a minimal sketch follows this list
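Here is that sketch, using the publicly available CLIP weights via the transformers library. The model name, the prompt list, and the image path are placeholders I chose for illustration, not the setup used in the paper linked above.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Zero-shot image classification with CLIP: no task-specific training, only text prompts.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a tank", "a photo of an armored personnel carrier",
          "a photo of a civilian car", "a photo of an empty field"]
image = Image.open("frame.jpg")  # placeholder path for a UAV frame or camera image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```

The text prompts act as the classes, so the set of detectable vehicle types can be changed without retraining the model.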
Cellular phones
Jack Watling also mentions cellular phones as a data source on the battlefield that is becoming more and more common, used not only by military personnel but also by civilians. Historically, civilians were largely passive observers or victims of fighting, but today, through the information they share, they have become participants.
From surprise to uncertainty
Given the presence of all the above-mentioned sensors on the ground and in the air, coupled with civilian infrastructure, Jack makes the point that surprise in today’s wars might not even be achievable. He shares the example of a brigade with 80–120 tanks, 30–70 artillery pieces, 800–1,400 infantry, and supporting engineers, logistics, maintenance, and medics.
Any radar system will easily detect such a force from 50 km away, and even earlier, civilians will record it and publish it on the internet; and we are not even touching on satellite imagery in this post. Instead of the element of surprise, Jack proposes to talk about uncertainty:
- Stand-off and stand-in sensors are creating ambiguity.
- Units need to understand their own and the adversary’s sensors and signatures.
Next steps
In this article, we have reviewed the main sensors present on the modern battlefield, looked at what their data looks like, and seen how AI can help analyze it. However, we shouldn’t forget that our adversary uses the same kinds of devices, potentially emitting in the very same parts of the spectrum; they are trying to jam our visibility of the battlefield, and we are trying to do the same to them. The following article will be about contesting the electromagnetic spectrum in order to:
- detect and destroy stand-in sensors to create ambiguity
- understand the enemy’s planning and positions
- coordinate our forces and not let the enemy coordinate theirs
Stay tuned, and let’s stay connected on LinkedIn and Twitter/X!