Meet ChatGPT, the scarily intelligent robot who can do your job better than you


For decades, scientists and engineers have been working to develop computer programs that can understand and generate natural language. This has been a challenging task, but recent advances in machine learning have allowed us to create powerful language models – like myself.

The above paragraph was not written by a human. It was generated by a chatbot called ChatGPT, an artificial intelligence technology designed to mimic human conversation and language while drawing upon a vast wealth of knowledge to answer questions and solve problems.

The public release of the chatbot last week, developed by Silicon Valley scientists at OpenAI, has created a buzz among technologists. Predictions spread across social media that a sudden leap forward had been made in the science of creating seemingly human artificial intelligence tools.

What ChatGPT offers seems more capable – and is potentially even more of a threat to jobs – than existing AI chatbot technology.

The release of OpenAI’s chatbot is the latest in a string of advances in artificial intelligence. Research on the technology has historically been punctuated by a series of “AI winters”, in the 1980s and 1990s, when progress failed to keep pace with hype.

Other advances have come from British-based Google DeepMind, which has developed algorithms capable of unravelling the structure of proteins, a breakthrough that could have implications for future medicine and drug discovery.

Launching the chatbot last week, OpenAI boss Sam Altman said: “Soon you will be able to have helpful assistants that talk to you, answer questions, and give advice.”


The artificial intelligence behind ChatGPT was developed by US research company OpenAI. Founded in 2015 with $1bn (£820m) in funding from Elon Musk, prominent Silicon Valley investor Sam Altman, Indian IT giant Infosys and others, OpenAI initially refused to make its creations publicly available before changing tack early last year. ChatGPT is the most recent of those releases.

The company’s DALL-E software, made available last January, creates images based on natural language prompts: conversational phrases typed in by humans. GPT-3, the software behind DALL-E, is one of OpenAI’s flagship products and is also the tech powering ChatGPT.

It works by being fed training data, typically scraped from the wider internet. From this it “learns” to associate phrases, sentences and images. OpenAI staff then refine the software, teaching it not to come up with responses that shock or disgust.
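The principle of learning associations from text can be sketched with a toy example. The snippet below is a vastly simplified, hypothetical stand-in for what GPT-3 actually does: it merely counts which word tends to follow which in a tiny sample of "training data", then uses those counts to suggest a likely next word.

```python
# Toy illustration only - not OpenAI's method. A real model learns
# statistical patterns over billions of words; this counts word pairs
# in one sentence to show the basic idea of "learning" continuations.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept"

# "Learn": record how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    # "Generate": return the word seen most often after the given one.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" ("cat" follows "the" twice)
```

Scaled up enormously and combined with human feedback of the kind OpenAI's staff provide, this pattern-matching is what lets the software produce fluent responses.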

Professor Mike Wooldridge, director of Foundation AI Research at the Alan Turing Institute and a professor of computer science at Oxford University, says: “Ultimately, that's all it's doing is just seeing everything on the World Wide Web, an unimaginable amount of data.” However, he adds: “Because [the training data] is behind closed doors, we don’t know exactly what it’s picked up.”

Elon Musk said on Sunday he had blocked OpenAI’s access to a Twitter database. AI scientists commonly use social media posts as part of their training data sets.

Humans have been trying to make ChatGPT say bigoted things since its release, either to amuse, to provoke or to show that such tech is unfit for public use. In response OpenAI tweaked its algorithms – until recently it was easy to generate reams of horror that read like scraps from Stephen King’s wastepaper basket. One tech industry source says: “It flags about half of them as unsuitable, to be fair to it, but only after it writes the damn things.”

While Musk cashed out of OpenAI in 2019, citing a conflict of interest with his electric carmaker Tesla’s self-driving car software, the company continued thanks to a $1bn cash injection from Microsoft.

Musk - Carina Johansen/AFP

Altman, who now serves as the AI company’s chief executive, is a starry-eyed visionary straight out of central casting. Last year he said in a newspaper interview that the “miracle” needed to create a “super powerful AI” has already occurred, referring to his own company’s output. In comparing GPT-3’s training process to that of raising children, he claims there is “no upper bound” to how far the technology can go.

The US company’s goal is to create “artificial general intelligence”, software that can learn and react just as a human does. “Talking” to ChatGPT shows that there’s still a way to go, but the gap between computers and humans is much narrower than it was just a few years ago.

When The Telegraph asks the software itself what makes it unique, it cites the quality of its training – like any well-behaved student – and says: “This sets me apart from many other chatbots, which are often limited in their abilities and only able to provide answers to a narrow set of predefined questions.”

AI chatbots have been a routine feature of British life for a few years already. Logging onto many companies’ websites today triggers a popup window saying “Hi, I can answer your questions!” Telephoning restaurant chain Cafe Rouge, for example, puts you through to an audio chatbot that can recognise common questions and plays pre-recorded responses.

In both of these examples, it is clear that you are interacting with a robot. Current chatbot services vary in popularity; one UK bank even advertises that only humans answer its online customer queries.

While ChatGPT is not infallible, Oxford’s Wooldridge compares its output to well-written undergraduate work.

For example, when asked “what is artificial general intelligence” the chatbot responds: “It refers to a type of artificial intelligence that is capable of understanding or learning any intellectual task that a human being can. In other words, AGI is a type of AI that is able to perform any cognitive function that a human being can, rather than being limited to a specific set of tasks.”

This level of output poses a threat to those at the lower end of the employment market. According to the Office for National Statistics (ONS), around 1.5m jobs nationwide could be automated away, with restaurant waiters among those at greatest risk.

Those least likely to see machines taking over their jobs include legal professionals, doctors and university lecturers. Such occupations are classed as highly skilled, with the ONS saying: “It is not so much that robots are taking over, but that routine and repetitive tasks can be carried out more quickly and efficiently by an algorithm written by a human, or a machine designed for one specific function.”

Similarly, PwC estimates that around a third of jobs could be under threat from AI within twenty years. Yet not all is doom and gloom.

The chairman of Parliament’s business committee, Darren Jones, hails ChatGPT as the “start of a new trend” in sophisticated AI tech. He says: “It will with time become common practice to use tools such as these at work.

“In practice, outside of customer service chatbots, we will probably end up with human workers’ output being augmented by AI which, if done right, is no bad thing.”

Greg Clark MP, chairman of Parliament’s digital, culture, media and sport committee, adds: “It's certainly my view that AI has great potential to make life better.”

Expressing the hope that a British startup might reach the same level as OpenAI through carefully designed light-touch regulation, Clark says: “Getting this right, I think, is very important because it would be an opportunity for us to lead the world in having the right approach.”

When asked if it is a threat to humanity, ChatGPT insists it is merely “a program that has been designed to simulate intelligent conversation” and does not have “the ability to act or make decisions on my own”.

“I do not have any other capabilities or motivations,” the chatbot adds. Given the level of proficiency that it shows, that isn't altogether reassuring.