Military Tech Execs Tell Congress an AI Pause Is 'Close to Impossible'

Palantir COO Shyam Sankar speaks during the CES 2023 media day at Mandalay Bay Convention Center in Las Vegas, NV on January 4, 2023.

Military tech executives and experts speaking before the Senate Armed Services Committee Wednesday said growing calls for a “pause” on new artificial intelligence systems were misguided and seemed “close to impossible” to enact. The experts, who spoke on behalf of two military tech companies as well as the storied defense research organization the Rand Corporation, said Chinese AI makers likely wouldn’t adhere to a pause and would instead capitalize on a development lull to usurp the United States’ current lead in the international AI race. The world is at an AI “inflection point,” one expert said, and it’s time to step on the gas to “terrify our adversaries.”

“I think it would be very difficult to broker an international agreement to hit pause on AI development that would actually be verifiable,” Rand Corporation President and CEO Jason Matheny said during the Senate hearing Wednesday. Shyam Sankar, the CTO of Peter Thiel-founded analytics firm Palantir, agreed, saying a pause on AI development in the US could pave the way for China to set the international standards around AI use and development. If that happens, Sankar said he feared China’s recent regulatory guidelines prohibiting AI models from serving up content critical of the government could potentially spread to other countries.


“To the extent those standards become the standards for the world is highly problematic,” Sankar said. “A Democratic AI is crucial.”

‘We must spend at least 5% of our budget on capabilities that will terrify our adversaries’

Those dramatic warnings come just one month after hundreds of leading AI experts sent a widely read open letter calling for AI labs to impose an immediate six-month pause on training any AI systems more powerful than OpenAI’s recently released GPT-4. Before that, human rights organizations had spent years advocating for binding treaties or other measures intended to restrict autonomous weapons development. The experts speaking before the Senate Armed Services Committee agreed it was paramount for the US to implement smart regulations guiding AI’s development but warned a full-on pause would do more harm than good to the Department of Defense, which has historically struggled to stay ahead of AI innovations.

Sankar, who spoke critically of the military’s relatively cautious approach to adopting new technology, told lawmakers it’s currently easier for his company to bring advanced AI tools to banking giant AIG than to the Army or Air Force. The Palantir CTO contrasted that sluggish adoption with Ukraine’s military, which he said learned to procure new software in just days or weeks in order to fight off invading Russian forces. Palantir CEO Alex Karp has previously said his company offered services to the Ukrainian military.

Unsurprisingly, Sankar said he would like to see the DoD spend even more of its colossal $768 billion budget on tech solutions like those offered by Palantir.

“If we want to effectively deter those that threaten US interests, we must spend at least 5% of our budget on capabilities that will terrify our adversaries,” Sankar told the lawmakers.

Others, like Shift5 co-founder and CEO Josh Lospinoso, said the military is missing out on opportunities to use data already being created by its armada of ships, tanks, boats, and planes. That data, Lospinoso said, could be used to train powerful new AI systems that could give the US military an edge and bolster its cybersecurity defenses. Instead, most of it currently “evaporates in the ether right away.”

“These machines are talking, but the DoD is unable to hear them,” Lospinoso said. “America’s weapons systems are simply not AI ready.”

Experts soured on open source AI, but they’re open to ‘data poisoning’ Chinese AI

Maintaining the military’s competitive edge may also rely on shoring up data generated by private US tech companies. Matheny spoke critically of open-source AI companies and warned that their well-intentioned pursuit of free-flowing information could inadvertently wind up aiding military AI systems in other countries. Similarly, other AI tools believed to be “benign” by US tech firms could be misused by others. Matheny said AI tools above a certain, unspecified threshold probably should not be allowed to be sold to foreign governments and should have some guardrails put in place before they are released to the public.

In some cases, the experts said the US military should consider going a step further and engaging in offensive actions to limit a foreign military’s ability to develop superior AI systems. While those offensive actions could look like trade restrictions or sanctions on high-tech equipment, Lospinoso and Matheny said the US could also consider “poisoning” an adversary’s data. Intentionally manipulating or corrupting datasets used to train military AI models, in theory at least, could buy the Pentagon more time to build out its own.
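“Data poisoning” in this context generally means corrupting the training data an adversary’s models learn from so the resulting systems perform worse. As a purely illustrative sketch (not anything described at the hearing, and with all names and parameters invented here), one of the simplest forms is a label-flipping attack, where a fraction of training labels are silently reassigned:

```python
import random

def flip_labels(dataset, flip_fraction=0.1, num_classes=10, seed=0):
    """Simulate a simple label-flipping poisoning attack.

    `dataset` is a list of (features, label) pairs. A random
    `flip_fraction` of the examples have their labels reassigned to a
    different class, which degrades any model later trained on the data.
    Returns a new poisoned copy; the original list is untouched.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        features, label = poisoned[i]
        # Pick any class other than the true one.
        wrong = rng.choice([c for c in range(num_classes) if c != label])
        poisoned[i] = (features, wrong)
    return poisoned

# Toy usage: 100 examples, poison 20% of the labels.
clean = [([0.0], i % 10) for i in range(100)]
poisoned = flip_labels(clean, flip_fraction=0.2)
changed = sum(1 for a, b in zip(clean, poisoned) if a[1] != b[1])
```

Real-world poisoning of a military dataset would be far subtler than this toy example, but the core idea is the same: small, hard-to-detect corruptions at training time translate into unreliable models at deployment time.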
