In the first 24 hours of the assault on Iran, the US military struck more than 1,000 targets, nearly double the scale of the “shock and awe” attack on Iraq over two decades ago. This acceleration was made possible by AI systems that speed up the targeting process. Chief among them is the Maven Smart System.
In her new book, Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare, journalist Katrina Manson investigates the development of Maven from its inception in 2017 as an experiment in applying computer vision to drone footage. The project spurred employee protests at Google, the military’s initial contractor, prompting the company to back out. Pushed forward by a Marine intelligence officer named Drew Cukor, whose story forms the backbone of Project Maven, the system ended up being built by Palantir and draws on technologies developed by Microsoft, Amazon, Anthropic, and others. Now used across the US armed forces and recently purchased by NATO, Maven synthesizes satellite imagery, radar, social media, and dozens of other data sources to identify and target entities on the battlefield. It also speeds up what’s called the “kill chain.”
Maven combines computer vision with a sort of workflow management system that finds targets, pairs them with weapons, and allows users to quickly click through the other steps of a targeting cycle. A process that once took hours can now be completed in seconds. An official tells Manson that the technology has allowed the US to go from hitting under a hundred targets a day to a thousand, and with the addition of LLMs, up to five thousand targets a day.
One of the thousand targets struck on the first day of the Iran war was a girls’ school, killing more than 150 people, mostly children. The school had previously been part of an Iranian naval base, yet it was listed online as a school, and playgrounds were visible on satellite imagery. While much of the coverage after the strike focused on possible hallucinations by Claude, the technology historian Kevin Baker wrote in The Guardian that Maven and the acceleration it enabled are the more relevant place to look. “A chatbot did not kill those children,” he wrote. “People failed to update a database, and other people built a system fast enough to make that failure lethal.”
The pace of war is set to accelerate further. Manson uncovers military programs to develop fully autonomous weapons — including an explosive-laden drone Jet Ski — capable of finding and destroying targets on their own.
I spoke to Manson about Maven and how AI is changing warfare.
This interview has been condensed and edited for clarity.
Colonel Cukor was an early and determined proponent of AI. Can you say a bit about him and what his initial motivations were?
He is chief of Project Maven, so he was the day-to-day doer and leader, but he also had this very long-term vision, which comes from his frustration that US military operators in Afghanistan were equipped with very poor intelligence tools. There was this idea that the US essentially fought that war 40 times over, every six months, because information wasn’t being handed over [when troops rotated in]. He was frustrated that data was in Excel and PowerPoint and he wanted an analytic tool that would bring intelligence to the frontline military operators. But he also had this vision for what he called “white dots” — that there would be white dots shown on a map infused with intelligence information, like a coordinate, what is there, the elevation, what is known about it. And this becomes one of the driving forces of what he tries to create through Project Maven.
How was Maven initially conceived in the military? Was it as this interface and information management system?
It comes out of this project called Project Maven that starts in 2017. The actual project already existed and had already got a funding stream. It was to use AI against satellite imagery, but then it got repurposed for drone video imagery. This is because the US is thinking about how to develop AI technologies for any potential conflict with China. They had this idea that eventually war would run faster than humans could think, so they wanted to bring AI into this. The initial idea proposed by Colonel Cukor is to apply AI to drone video footage. They were sometimes managing to analyze as little as 4 percent of the collection, so they wanted AI essentially to take the place of human eyes in analyzing what was there, but it was always bigger.
The public first heard about Maven with the Google protests in 2018, and I remember Google at the time saying that this technology would not be used to kill people. But it sounds like targeting was always the intention?
A spokesperson from Google at the time said that flagging images for review on the drone feed with the help of AI was intended to save lives and was for non-offensive uses only. That is not what my reporting shows. My reporting shows that many of the US military operators were motivated by the aim to save US lives and reduce civilian harm, so in that sense, it is “not offensive” because you’re analyzing intelligence information. But in the wider sense and very quickly, in the very real sense, AI target selection was intended for targeting.
I asked someone in the book whether offensive weapon strikes were intended to be part of Project Maven, and he replied, “yeah, of course, it’s not like we’re doing it for kicks. The goal of the intel is to take out high-value targets.”
When the Google deal falls apart, that’s when Palantir steps in. Can you tell me about Palantir’s role in the project?
Two things happen. Microsoft and AWS [Amazon Web Services] take a much bigger role in producing the algorithms and also in the compute, and alongside that, Cukor goes to Palantir and says, “Can you help?” He’s pitching this idea of the white dots on a screen. He has this 10-year vision for how the US military will remake themselves, and they’ve been trying out algorithms, which at that stage are not very good at identifying anything, and are also having to sit in systems that aren’t fit for purpose. They had a lot of problems with users not believing in AI and finding the displays very distracting. So he wants a user interface that will please the user.
So he pitches to Palantir that they create a user interface, which actually Palantir doesn’t want to do. I’m told they didn’t believe that AI was going to take off, and they also didn’t want to just make a fancy user interface. They wanted to crunch the data. But that wasn’t initially what Cukor was pitching them and he was very persuasive. He also wanted them to be less arrogant, and he ends up counseling them on how to attempt to remake their reputation inside the Department of Defense and to get these contracts, which initially, I don’t think are worth much money. But today, nearly 10 years later, I’ve reported that Maven Smart System is going to become by the end of September a “program of record” and Palantir is the prime contractor, so in the end, it’s going to be lucrative for them.
Ukraine sounded like a pretty big inflection point in the development of these systems. What happened there?
This becomes a really important moment where the artillery fire team realizes that AI can help them speed up their operations and targeting. It becomes much more explicit that intelligence is going to feed into operations. When the US is supporting Ukraine, even before the Russian invasion, the 18th Airborne Corps is over in Wiesbaden in Germany and very quickly they start to use computer vision on the Maven Smart System to figure out where the Russian positions are, where the tanks are, what is happening. The algorithms fail very quickly. The algorithms were used to the desert in the Middle East and in Afghanistan; they couldn’t recognize tanks and other features in the snow. They collect new satellite footage over the Russian tanks and other equipment and send them back to the US to retrain the algorithms really quickly, so they become much better at spotting tanks.
The US starts sending what they end up calling “points of interest” to the Ukrainians, who then use that to target Russian equipment and personnel. The language of “points of interest” is interesting because the US is trying to thread this needle to provide support to the Ukrainians without being seen in Russia’s eyes as a direct participant in the war. So they evolved this idea that a “target” is something that has gone through a process, and they are giving the Ukrainians everything just shy of that. I’m able to report that at the high point on one day in 2022, the US passes 267 points of interest to Ukraine.
What are the parts of the targeting process that are getting automated that cause that kind of acceleration?
The US military would say nothing is yet automated, because there is this extra stage of targeting, which is really key, which is the legal decision to strike something. In the case of why the kill chain is speeding up, what I’ve been told is that a lot of the processes involved in getting permission to strike a target have traditionally been extremely analog and slow, involving telephones and swivel chairs. So this is part of shifting this process onto digital platforms and then eventually getting to automate it.
The 18th Airborne Corps had humans at six key steps. So the human decides when and how to shoot at a target. They assess what’s called an operational approach. They assess the data collected, they decide to act, communicate the decision, execute the fire, and then communicate what happened. And then with the arrival of Maven’s AI, they reduced the human role in the loop to only two places: the decision to act and the action itself. They can supervise the machine making the decision during the automated collection process, but the assessments throughout would all be AI enabled. Even at the NGA [National Geospatial-Intelligence Agency], they are producing intelligence reports that no human eyes or hands have touched that are entirely AI generated. So there’s been this huge shift into really making data and the system king.
The other reason that they’re able to get to so many targets in a day is because the Maven Smart System is using large language models. I’ve reported [they’re using] Claude from Anthropic, and I was told it was helping speed up the processes. And Centcom [US Central Command] themselves said that with the help of AI, they were able to speed up processes that used to take days and hours down to as little as seconds. The commander, the US would say, is still making the decision. But I’ve also spoken to US military ethicists who say that there is a risk of the gamification of war, and that people may end up trusting the targets that they’re being offered on screen without understanding fully the data that’s supporting it.
Now, the pushback is that this is data that’s better tagged than ever before, that this AI-based system, essentially being a database system, means that you can audit the data and go deep into it and also give headquarters a way of following what military operators at the edge are doing with much greater transparency and accountability than ever before. This enormous operation that the US has undertaken in Iran will ultimately be a case in point. And we’ll be looking for data and accountability about how the US has, in the end, used this platform.
There’s a technology scholar, Kevin Baker, who wrote a piece about how Claude got a lot of blame initially for the school strike in Iran. But he pointed to this longer-term acceleration, and said that the steps now being automated away may have left time for deliberation, or for noticing errors or contradictory intelligence. I’m curious if there were concerns in the military that things were getting too fast?
There’s a really significant debate inside the US military about how far they should lean into this. Some are saying it’s inevitable, and others are really warning that that human assessment at the last minute is the thing that can save lives. And I don’t think that debate has been settled, but the direction of travel is clear in that the Maven Smart System is becoming a program of record. The Central Command commander is taking time out of these operations to go on to X and say that they are using AI and that they’re finding it helpful. Then you have people like former Defense Secretary Jim Mattis saying that targeting is no substitute for strategy, that hitting a lot of things, essentially, doesn’t get you to victory.
There’s one example that I keep going back to in my mind, which is in 1999, when the US strikes the Chinese Embassy in Belgrade. In the analysis that the US offers publicly afterwards, they say that the embassy was incorrectly labeled on a map. The embassy had moved recently. The map hadn’t been updated. One map had; others hadn’t. Someone even tried to make a call because they got worried and wanted to check, but they weren’t able to reach someone in time.
In an example like that, if your systems flag a problem and they’re digitally connected, on the one hand, it could be much easier to raise anomalies, problems, risks of mistakes. On the other hand, target selection from an erroneous targeting database could be made even quicker without those checks. So the decision that the US military makes about leaning into AI on the targeting cycle will only be as good as the data that is feeding it.