Facebook’s AI Chief: No, the Machines Are Not Coming to Kill Us

Yann LeCun, director of Facebook’s Artificial Intelligence research lab. (via Facebook AI Research).

Some of the tech industry’s most brilliant minds have warned about the dire threat of artificial intelligence. SpaceX and Tesla CEO Elon Musk has compared it to “summoning the demon.” And Stephen Hawking has said that “the development of full artificial intelligence could spell the end of the human race.”

This is perhaps why people were concerned when, on Monday, Facebook’s AI Research lab revealed its latest project: technology that can identify you in photos even when your face isn’t visible.

Learning that Facebook was behind the project prompted many websites to deem it “alarming.” But Yann LeCun, the director of the company’s AI Research lab, begs to differ. Unlike Hawking and Musk, he’s not at all afraid of the considerable advances in his field. I sat down with him on Tuesday at the La French Touch Conference in New York to ask him why. Here are highlights from our conversation on privacy, what still needs to be accomplished in AI, and what the future holds for his lab.

Can you promise me that Facebook has no plans to use this technology in its products?

It’s a scientific experiment. We need to figure out how well we can do on this kind of problem. We actually published the data set for this so that other researchers can do experiments with it. We don’t intend to deploy this as a product.

How do you balance what Facebook wants and what you want for your lab?

We are a research lab. We’re not developing products directly, but we work with product teams to get technology out the door. We have a spectrum of projects, ranging from pure theory (basically, just mathematics) to the design of new principles that would let us push artificial intelligence further, to things that have potential applications in a few years. Then there are things like image recognition, facial recognition, and natural language understanding, where systems are actually deployed and we try to improve them.

But the really important point of this is: We publish all the research we do. So there’s not some dark basement somewhere where something secret is happening.


What hasn’t been done yet in the artificial intelligence world that you’d like to see tackled?

We’re very, very far from building machines that are as intelligent as humans or even dogs. There are a lot of unsolved problems in AI.

All the success we’ve seen in AI over the last few years is due to the emergence of a new class of techniques called deep learning. These techniques have been around for a long time, but people didn’t believe they would work until recently, when machines became powerful enough to apply them in a big way. They use a form of machine learning called “supervised learning.” You can train a machine to recognize flowers and tables and chairs and cars by showing it tons of examples of each. For every image you show it, you tell it: “That’s a flowerpot.” And if it makes a mistake, the machine adjusts its internal parameters, so that the next time you show it that flowerpot, or a similar one, it will say it’s a flowerpot. That’s supervised learning.
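To make that loop concrete, here is a minimal sketch of supervised learning in Python. It uses the classic perceptron update rule rather than the deep networks LeCun describes, and the “flowerpot vs. chair” data is invented for illustration; real systems have millions of parameters, but the adjust-on-mistake principle is the same.

```python
# Toy supervised learning: show an example, let the machine guess,
# and nudge its parameters whenever it guesses wrong (perceptron rule).
# The data and labels are invented for illustration.

# Each example: (features, label). Label 1 = "flowerpot", 0 = "chair".
examples = [
    ([0.9, 0.2], 1),
    ([0.8, 0.1], 1),
    ([0.1, 0.9], 0),
    ([0.2, 0.8], 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

for _ in range(10):                      # a few passes over the data
    for features, label in examples:
        guess = predict(features)
        error = label - guess            # 0 if correct, +/-1 if wrong
        if error != 0:                   # "if it makes a mistake..."
            for i, x in enumerate(features):
                weights[i] += error * x  # "...adjust its parameters"
            bias += error

print(predict([0.85, 0.15]))  # -> 1: a similar example is a "flowerpot"
```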

Most of the learning people and animals do is not like that. We walk around the world and get a lot of information about physics and how the world is organized by just absorbing it. We don’t know how to do this with machines yet. We have lots of ideas, and we’ve tried lots of things, but they just don’t work as well as they should. That’s a hard nut to crack.

I think it’s really interesting when you say that computers are less intelligent than dogs, since there’s such a fear that robots will surpass us soon. Why do you think people like Stephen Hawking and Elon Musk are so afraid of AI?

Stephen Hawking and Elon Musk have different motivations. Elon Musk is very interested in existential threats to humanity. That’s why he’s building rockets, in case something bad happens on Earth. I’ve talked to him about this. It’s just what he’s interested in.

Stephen Hawking’s timescale for thinking about things is millions or billions of years. So, sure, on that timescale the world will be very different; humans might not even be around.

That said, those people are not working in AI. People who are actually researching AI don’t have those fears, for various reasons. First of all, there are already machines that outperform humans. You can buy a machine for $50 that will beat you at chess, right? Is it smarter than you? It’s smarter than you only in that particular, narrow domain. A chess player is not going to take over the world.

Most of the AI systems we’ll be living with in the foreseeable future will be of that type. They’ll have narrow intelligence and an expertise in [a particular] field. Once we have the ability and the technology and the science and understanding to build machines that have general intelligence, they will not have the same qualities and shortcomings as humans. There’s no reason for those machines to have [a sense of] self-preservation, for example, unless we build that into them. There’s no reason why they’d want access to power, or to be jealous, or anything like that. You know, all the things that drive humans to do bad things to other humans: There’s no reason machines will have any of this unless we build it into them. And we won’t.

I think AI right now is in the state that biotech was in the mid-’70s, when people realized [the possibilities of] genetic engineering. There were big dangers. People were thinking: What if we build a superbug and it wipes out humanity? So we have to be very careful, and we have to come up with rules for safety.

What’s the next project you’re working on?

In the theoretical area, we’re working on new principles for unsupervised learning, as well as the ability for a learning machine to have short-term memory. Take the systems that do object recognition in images: they tell you what’s in an image, and then [the knowledge is] gone. So if you show such a machine half a second of video, it’ll tell you what’s in the video. But it’s not currently possible to just play a whole video for the machine and expect it to tell you a story [about] what happened in it.
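One common way researchers give a model that kind of memory is a recurrent update that carries a state vector from frame to frame, so the network summarizes a whole clip instead of forgetting each frame. The sketch below illustrates the generic idea, not Facebook’s actual system; frame_features is a hypothetical stand-in for a learned image encoder, and the numbers are made up.

```python
# Sketch of "short-term memory" over video: instead of classifying each
# frame independently and forgetting it, carry a state vector forward.
import math

def frame_features(frame):
    # Stand-in: a real system would use a learned image encoder here.
    return frame

def recurrent_step(state, features, w_state=0.5, w_input=0.5):
    # Blend the running memory with the new frame's evidence.
    return [math.tanh(w_state * s + w_input * f)
            for s, f in zip(state, features)]

video = [[0.1, 0.9], [0.3, 0.7], [0.9, 0.2]]  # toy per-frame features
state = [0.0, 0.0]                            # memory starts empty
for frame in video:
    state = recurrent_step(state, frame_features(frame))

print(state)  # final state summarizes the clip, not just the last frame
```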

One of the things we’ve been working on is machines that can accumulate knowledge by reading text and watching videos, so that we can then ask them questions about the text or video they just read. This comes out of a new technique that deep-learning people are working on. Facebook is working on this. Google is working on this. A bunch of academic groups in Montreal are working on this. It’s kind of a hot topic.
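As a rough illustration of that read-then-answer setup, here is a toy in the same spirit: the machine stores every sentence it reads, then answers a question by retrieving the stored sentence that overlaps it most. The real systems learn their matching from data; this word-overlap score and the sentences are invented stand-ins.

```python
# Toy "read text, then answer questions": store sentences as memories,
# answer by retrieving the memory with the most words in common.
import string

memories = []

def tokens(text):
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def read(sentence):
    memories.append(sentence)  # accumulate knowledge as it reads

def answer(question):
    q = tokens(question)       # crude stand-in for a learned matcher
    return max(memories, key=lambda m: len(q & tokens(m)))

read("Mary went to the kitchen")
read("John picked up the ball")
read("John went to the garden")

print(answer("Where did John go to?"))  # -> "John went to the garden"
```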

Then, the next thing is how to integrate those deep-learning techniques with reasoning. So basically, [machines that] have common sense about things.

Follow Alyssa Bereznak on Twitter or send her an email.