Summit between Purdue, Department of Defense looks at quantifying trust in a machine's purpose

WEST LAFAYETTE, Ind. — Quantifying trust, especially between humans and technology, seems like an impossible task. For the U.S. Department of Defense, however, that's exactly what it intends to do.

The Summit on Trusted Autonomy Research and Technology was held this week at the Shively Club in Ross-Ade Stadium. This summit – presented by Jaret Riddick, the DoD's principal director for autonomy; the National Security Innovation Network; the Purdue Center for Innovation in Control, Optimization and Networks (ICON); and The Johns Hopkins Institute for Assured Autonomy – aimed to collect ideas on how to make this quantifiable trust a reality.

"This is a really complex problem," Shreyas Sundaram, an associate professor of electrical engineering at Purdue and co-director of the Center for Innovation in Control Optimization and Networks, said. "We need all the stakeholders, all the experts to really weigh in on this problem because we need to start thinking very carefully about how to make this real, right?"

Sundaram elaborated on what it means to "trust" a machine to carry out its intended purpose, and why that matters.

"So when we talk about 'Do you trust your machine?' What does that even mean? So in order for people, companies, the government to feel (trust in) these devices...it's not enough to say 'I trust it.' We need to have numbers associated with it. You need to be able to say 'Here's why I trust it. Here's why I quantify that trust.' "

This sense of trust between humans and machines is especially important to the DoD because of the critical, and sometimes dangerous, roles autonomous technology and AI play in war and other military settings. The lives of soldiers or the sensitivity of vital information can be in autonomous technology's hands, figuratively or literally.

As such, the START summit was meant to serve as something of a kickoff – or start – to the DoD's initiative known as OPTIMA, or "Operational Trust in Mission Autonomy."

"The main future goal (with OPTIMA) is to deliver trusted autonomy on the battlefield for the warfighter in complex and contested environments." Jaret Riddick, principal director for Autonomy in the Office of the Undersecretary of Defense for Research and Engineering, said. "To deliver that (trust) as a product so that we can quantify it and use it as a battlefield asset."

In the near term, progress on the initiative means defining the problem and the steps needed to overcome it. That work will be led by the DoD with support from organizations such as NASA, the Department of Homeland Security and companies of all sizes.

"Industry, academia, international partners – those are folks that we want to engage," Riddick said. "And so, through the OPTIMA initiative, we're designing all these sorts of engagements and interactions to raise awareness, to get feedback from our academic and industry partners about the path forward and to also understand where those key alliances are that are gonna make us more capable in the future."

While the START summit was meant to raise awareness of the OPTIMA initiative, work on building and quantifying trust between humans and autonomous robots and AI is already well underway, especially at institutions such as Purdue University.

"The name of the meeting is the 'Summit on Trusted Autonomy Research Technology,' which stands for START," Riddick said. "But that doesn't imply that we're starting in an area where there's nothing. There's so much research. The room is full today because these researchers, these DoD folks, have been working in this space for quite sometime."

Purdue University, and ICON in particular, has a strong interest in developing autonomous technology, an interest that aligns the university's work with the DoD's OPTIMA efforts.

"Broadly speaking," Sundaram said, "ICON, and Purdue in general, is very invested in helping to grow autonomous systems. Autonomous systems are gonna be everywhere. They're already prevalent, but they're gonna become even more so and these are really complex systems.

"...There's a huge amount of research going on at Purdue and within ICON to make these systems work better. Be more safe, more reliable. And so OPTIMA (is) meant to help transition to a future where we do have trusted autonomy working side by side with humans...We have a lot of strength here."

Margaret Christopherson is a reporter for the Journal & Courier. Email her at mchristopherson@jconline.com and follow her on Twitter @MargaretJC2.
