Amid AI hype, SXSW explores ethics, implications of emerging tech disruption

AUSTIN (KXAN) — AI was a hot topic at the 2024 SXSW Conference, from AI marketing tools to AI-generated films, self-driving cars and more.

At the SXSW Creative Industry Expo, a booth run by Salesforce’s Office of Ethical and Humane Use asked expo attendees to sort tasks they would trust to AI from those they would only trust to a human.

For humans, participants chose analysis, creative work, decision-making, emotional work and performing final checks. For AI, they chose starting a project, writing code, organizing and processing data, and making recommendations.

Beyond the expo floor, a number of panels at the event explored how those tools can be used safely and ethically.

Spotting AI deepfakes with AI-based tools

At a nearly full SXSW session March 8, Laura Plunkett, executive director of Comcast NBCUniversal’s startup incubator LIFT Labs, asked Ben Colman, CEO of deepfake detection platform Reality Defender, to talk about his company’s mission to safeguard reality from AI-created content.

Generative AI programs, which can create text, images, sounds and videos from a user-provided prompt, are growing in complexity. With the rise of such tools and the ease of accessing them come real risks to individuals, businesses and society, according to Colman.

“You’d have to be living under a rock for the last two years to not see a lot of the chaos happening around us with deepfakes and generative AI,” Coleman said.

Laura Plunkett (left), executive director of Comcast NBCUniversal’s LIFT Labs, and Ben Colman (right), CEO of Reality Defender, speak at a March 8, 2024 SXSW session. (KXAN Photos/Cora Neas)

Colman and Plunkett discussed harmful uses of AI, such as faked video statements by world leaders and generated nude images of celebrities. However, neither took a stance on the technology’s morality; instead, Colman positioned his company’s tools as a safeguard.

“We’re really careful to avoid saying AI is good or bad. Many of our use cases are fantastic; we’re solving diseases and supply chain challenges,” Colman said. “What we’re offering is another check to indicate to consumers that what they’re viewing was generated by AI.”

Reality Defender’s technology uses AI to detect whether content was generated by a program and alert its user. Falsified images are a concern, but Colman warned that voice cloning poses a massive fraud risk. Before launching Reality Defender, Colman worked for an investment bank that was hit by a voice fraud attack.

“It’s a cat and mouse game,” Colman said. “If you’re just thinking about your lives, the amount of decisions, money transfers and permissions that are committed through voice are astronomical. But by next year you’re going to have real-time video communications [attacks].”

AI and Ethics

On March 13, Eliza Strickland, senior editor at the technology news website IEEE Spectrum, interviewed Michael Spranger, president of Sony AI, before an audience at the SXSW session “Ethics in an AI World: Better to be Seen or Unseen?”

Eliza Strickland (left), senior editor for IEEE Spectrum, interviewed Michael Spranger (right), president of Sony AI, on March 13 at a SXSW conference session. (KXAN Photos/Cora Neas)

According to Spranger, the terms “seen” and “unseen” were coined by Sony’s head of AI ethics, Alice Xiang, to describe the balance between privacy and inclusion in the datasets used by AI models. As an example, Strickland mentioned facial recognition software used by law enforcement.

“You may not want your data used in certain datasets,” said Spranger, about the concept of being unseen.

In contrast, to be seen by an AI model means to be included in its dataset. Between being seen and unseen lies a third concept, “missing”: if a demographic is underrepresented in a dataset, real-world harm can follow, Spranger said, such as a self-driving car failing to properly detect and react to underrepresented people.

“We want AI to work on everyone properly,” Spranger said. “There is no silver bullet, but there are a number of different mitigation degrees.”

Spranger urged the audience to continue the conversations around AI ethics beyond the conference and called for the AI community to adopt stronger ethical standards.

“I think that this shows all sorts of benefits for humanity, but I think humanity doesn’t trust the technology, particularly for its very benefits,” Spranger said. “It feels almost like we have to restart what we’re doing, and learning about the mistakes of the past, but then also looking forward and moving forward with conviction and with courage.”
