Designing a good artificial intelligence is hard. For a company like Google, which relies heavily on AI, designing the best possible AI software is crucial. And who better to design an AI than another AI?
If you said "literally anyone else" you might be right, but folks at Google's AI research lab, Google Brain, would disagree. The lab is reportedly building AI software that can build more AI software, with the goal of making future AI cheaper and easier.
Currently, building a powerful AI is hard work. It takes time to carefully train AIs using machine learning, and money to hire experts who know the tools required to do it. Google Brain's ultimate aim is to reduce these costs and make artificial intelligence more accessible and efficient. If a university or corporation looking to build an AI of its own could simply rent an AI builder instead of hiring a team of experts, the cost would drop and the number of AIs would grow, spreading the benefits of the technology far and wide.
What's more, using AIs to build more AIs may also increase the speed at which new AIs can be made. Currently, AIs can require weeks or months to learn how to do tasks by using unfathomably large amounts of computing power to try things over and over again, quite literally starting with no understanding of anything they're doing. AI trainers might find ways to optimize that process that no human could hope to discover.
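Google hasn't published the internals of its AI maker, but the core idea of automating that trial-and-error process can be illustrated with one of its simplest flavors: random search over model configurations. This is a toy sketch, not Google's method; the `evaluate` function, the configuration names, and the search space are all hypothetical stand-ins for what would really be hours of training and validation.

```python
import random

def evaluate(config):
    # Hypothetical stand-in for the expensive step: training a model
    # with this configuration and measuring its validation accuracy.
    # Here a simple formula (favoring moderate depth and more units)
    # replaces the real training run.
    return (1.0 - abs(config["layers"] - 6) / 10) * min(config["units"], 256) / 256

# A hypothetical search space of model configurations.
search_space = {
    "layers": range(1, 11),        # how deep the network is
    "units": [32, 64, 128, 256],   # how wide each layer is
}

# Random search: repeatedly sample a configuration, "train" it,
# and keep the best one found so far.
random.seed(0)
best_config, best_score = None, float("-inf")
for _ in range(50):
    config = {key: random.choice(list(values)) for key, values in search_space.items()}
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, round(best_score, 3))
```

In a real system the evaluation step is what costs weeks of compute, which is why smarter search strategies than random sampling are the whole point of automating AI design.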
The downside is that AI building more AIs sure seems like it's inviting a runaway cascade and, eventually, Skynet. After all, one of the things that makes machine-learning AIs so effective is that they learn in ways that are completely unlike, and often completely opaque to, humans. Once you've trained an AI to accomplish a certain goal, you can't necessarily crack it open and see how it's doing it. Its understanding of the world is utterly alien. That's why Google's plans to prevent a Skynet-type catastrophe involve gently discouraging AIs from disabling their own kill switches as they are being trained.
For now, Google says its AI maker is not yet advanced enough to compete with human engineers. However, given the rapid pace of AI development, it may only be a few years before that is no longer true. Hopefully it doesn't happen before we are ready.
Source: MIT Technology Review