Even the Creator of ChatGPT Finds AI Scary, But Not Everyone Agrees
Regulation may be the key to keeping AI safe
New AI-guided chatbots are providing answers that scare some users.
Some experts say that AI could be dangerous.
One way to keep AI in check is through government regulations.
New chatbots guided by artificial intelligence (AI) are creeping users out; apparently, even the technology's creator feels the same way.
Sam Altman, the CEO of ChatGPT creator OpenAI, said recently that the world might not be "that far from potentially scary" artificial intelligence. Many observers agree and say regulating the technology will be critical.
"AI-based chatbots have the potential to be a tremendously valuable tool for businesses, organizations, and individuals if used responsibly and ethically," Matan Cohen, the director of product at Rakuten Viber, a secure messaging app, told Lifewire in an email interview. "The applications range from enhancing customer service, automating tedious tasks, to even assisting with mental health. However, we need to be mindful of the risks of AI, from spreading misinformation and lacking accountability, all the way to manipulating users in order to extract personal information or promote fake news and propaganda."
Scary Monsters?
Altman tweeted recently that moving to an AI-enabled future is "mostly good" and can happen fast, like the transition from the "pre-smartphone world to post-smartphone world." He also pointed to one challenge: AI chatbots that have unnerved some users with their human-like and sometimes creepy responses.
The dangers of AI may be growing. Cohen predicted that over the next year, more AI-based applications will emerge in our daily lives, offering both great value and great risk.
"While the average user may not yet be fully exposed to these risks, the more AI-based services become a part of our lives, the greater the risk will be," he said. "Experts are warning us now before it's too late, and given the pace of AI improvements, that day is not far away."
The Arguments for Regulating AI
Regulation may be the key to curbing dangerous AI. Cohen said AI capabilities, particularly AI chatbots, should be regulated and monitored much as Europe's GDPR (General Data Protection Regulation) protects users' privacy. He said that companies developing AI should be required to build in limitations that protect fundamental human rights and values.
"It appears that, in this case, the big tech companies pushing AI developments are aware of the dangers and their responsibility to prioritize the privacy and safety of users (although this wasn't always the case)," he added. "However, as AI becomes more of a 'commodity,' humanity cannot rely solely on the good faith of companies and individuals. This arena should be heavily regulated to ensure that AI is developed and used ethically and responsibly."
Lindsey Zuloaga, the chief data scientist at HireVue, who manages a team that builds and validates machine learning algorithms, said in an email to Lifewire that her primary concern with AI is the proliferation of disinformation.
"Frankly, innovation has outpaced safeguards, and it's important that researchers and technologists are asking critical questions and rapidly trying to build in safeguards," Zuloaga added.
Zuloaga said that all generative AI tools should undergo rigorous third-party algorithmic audits, with the results made public. Creators of new AI tools should also prioritize publishing AI Explainability Statements, which document for the public, customers, and users how a given technology is developed and tested.
Some experts say the fear of AI is overhyped. Matthew Kershaw, the vice president of strategy at generative AI company D-ID, said in an email interview that users' fears of AI may be inflated and misplaced.
"Any technology is only as good or bad as the use it is put to," he said. "We would urge those who are worried to advocate for clearer regulations and responsible business practices rather than fear the tech itself, which has immense potential for good."
Dan Whiteley, the chief technology officer at Jugo, said in an email to Lifewire that the media is overhyping the dangers of AI to create headlines. "There will be exceptions in my view where a chatbot is engineered or trained for fraudulent activity or to deliver malware, etc., but we need to ensure we have the right tools for protection," he added.