The United States Department of Defense (DOD) is conducting live tests in an attempt to leverage generative artificial intelligence in its decision-making process, according to a report by Bloomberg.

Generative AI is a type of artificial intelligence that creates new content, such as text, images, or music, in response to prompts. U.S. Air Force Colonel Matthew Strohmeyer said the military exercises, which run until the end of July, aim to determine how AI could be applied to decision-making as well as to sensors and firepower.

“It was highly successful. It was very fast,” Strohmeyer told Bloomberg. “We are learning that this is possible for us to do.”


The AI tools, Strohmeyer told Bloomberg, completed a request involving secret-level classified data—work that would take humans hours or days—in just 10 minutes. Granted, the U.S. military isn't about to hand control over to an AI chatbot, but he told the outlet that near-term use is possible.

"That doesn't mean it's ready for primetime right now," Strohmeyer said. "But we just did it live. We did it with secret-level data."

Military testers say the exercises run AI large language models through responses to a range of simulated global crises, such as a Chinese invasion of Taiwan. Beyond external threats, Strohmeyer said the military is also working with developers to test the models' trustworthiness, given their habit of "hallucinating."

Hallucinations are instances in which an AI generates false output not backed by real-world data, such as fabricated content, news, or information about people, events, or facts.

Bloomberg says the U.S. military used Donovan, a tool from developer Scale AI, to game out a hypothetical war between the United States and China over Taiwan. (In May, Scale AI was chosen by the U.S. Army for its Robotic Combat Vehicle (RCV) program.) The war game, the outlet said, drew on more than 60,000 pages of American and Chinese military documents and open-source information.


The AI responded that the U.S. would need to launch a full-scale attack involving ground, air, and naval forces, and even then would struggle to swiftly immobilize the Chinese military.

In June, the DOD published a list of companies awarded lucrative government contracts, including several that work with artificial intelligence, such as AI Signal Research, Booz Allen Hamilton, and Lockheed Martin.

The idea of the U.S. military working with AI may evoke images of Skynet from the 1984 action film "The Terminator," a fictional AI system created for the U.S. military that launches a nuclear holocaust. Another cinematic example is the military AI Joshua from the 1983 techno-thriller "WarGames."

Still, the DOD says it prioritizes ethical considerations and transparency in its approach to artificial intelligence.

"We both want to do them in a safe and responsible way, but also want to do them in a way that can push forward the cutting edge and ensure the department has access to the emerging technologies that it needs to stay ahead," Michael C. Horowitz, director of the Emerging Capabilities Policy Office at the DOD, said in July.

The Department of Defense did not respond to Decrypt's request for comment.
