Will IBM’s Watson Usher in a New Era of Cognitive Computing?

Computers as we know them are close to reaching an inflection point—the next generation is in sight but not quite within our grasp. The trusty programmable machines that have proliferated since the 1940s will someday give way to cognitive systems that draw inferences from data in a way that mimics the human brain.


IBM treated the world to an early look at cognitive computing in February 2011, when the company pitted its Watson computing system against two former champions on TV’s Jeopardy! quiz show. Watson’s ability to search a factual database in response to questions, to determine a confidence level and, based on that level, to buzz in ahead of competitors led it to a convincing victory. The accomplishment will soon seem quaint when compared with next-generation cognitive systems. IBM has already increased Watson’s speed, shrunk its previously room-size dimensions and hooked it to vastly more data.


Last month two prominent medical facilities began testing the revamped Watson as a tool for data analysis and physician training. The University of Texas M. D. Anderson Cancer Center is using Watson to help doctors match patients with clinical trials, observe and fine-tune treatment plans, and assess risks as part of M. D. Anderson’s “Moon Shots” mission to eliminate cancer. Meanwhile, physicians at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University are hoping IBM’s technology can help them make more informed and accurate decisions faster, based on electronic medical records.


It’s a nice start, but still only the beginning, says John Kelly, an IBM senior vice president and director of IBM Research, one of the world’s largest commercial research organizations. In his recent book, Smart Machines: IBM’s Watson and the Era of Cognitive Computing, co-authored with IBM communications strategist Steve Hamm, Kelly notes that one important goal for cognitive computers, and for Watson in particular, is to help make sense of big data—the mountains of text, images, numbers and voice files that strain the limits of current computing technologies.


Scientific American spoke with Kelly about cognitive computing’s relative immaturity, its growing pains and how efforts to develop so-called neuromorphic computers, which mimic the human brain even more closely, could someday relegate Watson to little more than an early 21st-century artifact.


[An edited transcript of the interview follows.]



When did you first see the stirrings of big data as a trend?

About four years ago we started to see an explosion of data. Previously, we had focused on the explosion of devices and connected technologies that make up the Internet of things.


How is big data different from data mining and analysis that companies have been doing for years to improve customer service and logistics?

It’s different in a number of ways: First is the scale, which is so large that traditional databases and query systems can’t deal with it—“big” is an understatement. Second, so much of the data is unstructured, coming from things like texts or tweets or signals from some connected device. It’s also flowing at incredible speeds. Finally, with big data we’re not always looking for precise answers; we’re looking for information that will help us make decisions.


It seems that emerging cognitive computing technologies will play an important role in making big data usable. How is cognitive computing different from the systems we’ve been using for decades?

Cognitive systems were conceived of and are being built to deal with this new immense amount of information. They have a number of characteristics different from today’s computers. One is that they learn patterns and trends. They no longer require reprogramming by humans for all the tasks we want them to do. Second, cognitive systems interact with people in a much more natural way. They understand our human language, they recognize our behaviors and they fit more seamlessly into our work–life balance. We can talk to them, they will understand our mannerisms, our behaviors—and that will shift dramatically how humans and computers interact.


If you consider a timeline of programmable computers that goes from ENIAC in the 1940s and 1950s to today’s smartphones, where is cognitive computing in terms of maturity?

We’re just at the beginning of this cognitive computing era. Think about the IBM System/360 mainframe in the 1960s or the first PC in the early ’80s. The very first cognitive system, I would say, is the Watson computer that competed on Jeopardy! But to make Watson, we pulled together pieces from the last era. Going forward, the underlying technology will change. The system will become much more interactive—it won’t respond just to question and answer; it will respond to complex dialogue.


How has IBM adapted the Jeopardy! version of Watson to help cancer researchers at M. D. Anderson?

Watson’s unveiling was on Jeopardy!, and it was quite an impressive performance. In the subsequent two years Watson’s technology has become dramatically faster and smaller than that first system. To work with doctors and medical schools across a number of different locations, Watson has ingested a large portion of the world’s medical information, whether it’s in journals, health records or doctors’ notes. The system is in the final stages of learning the details of particular cancers; M. D. Anderson, for example, is especially interested in leukemia. Over the past six to 12 months Watson has been working with oncologists in this area to help speed certain discoveries related to the disease.


How is Watson learning about cancer?

Watson can electronically ingest the raw information. Then Watson has to be trained. It’s presented with complex health care problems where the treatment and outcome are known. So you literally have Watson try to determine the best diagnosis or therapy. And then you look to see whether that was the proper outcome. You do this several times, and the learning engines in Watson begin to make connections between pieces of information. The system learns patterns, it learns outcomes, it learns what sources to trust. Some journals, some doctors, are more accurate than others. Watson learns all of this.
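
To make that training loop concrete, here is a minimal sketch in Python of outcome-supervised learning with per-source trust weights. Everything in it (the source names, the scoring rule, the update step) illustrates the general technique Kelly describes, not IBM’s actual implementation.

    # Illustrative only: train on cases whose correct therapy is known,
    # and learn how much to trust each evidence source.
    SOURCES = ["journal_a", "journal_b", "clinical_notes"]
    trust = {s: 1.0 for s in SOURCES}  # start with equal trust everywhere

    def predict(case):
        """Score each candidate therapy by its trust-weighted evidence."""
        scores = {
            therapy: sum(trust[s] * v for s, v in evidence.items())
            for therapy, evidence in case["candidates"].items()
        }
        return max(scores, key=scores.get)

    def train(cases, lr=0.1):
        """Nudge source trust up or down based on known outcomes."""
        for case in cases:
            guess = predict(case)
            correct = case["known_outcome"]
            for s, v in case["candidates"][correct].items():
                trust[s] += lr * v  # reward sources that backed the answer
            if guess != correct:
                for s, v in case["candidates"][guess].items():
                    trust[s] -= lr * v  # penalize sources behind the miss

    # One toy case: evidence maps source -> strength of support (0 to 1).
    cases = [{
        "known_outcome": "therapy_x",
        "candidates": {
            "therapy_x": {"journal_a": 0.9, "clinical_notes": 0.4},
            "therapy_y": {"journal_b": 0.8},
        },
    }]
    train(cases)
    print(trust)  # journal_a and clinical_notes end up more trusted

Repeated over many cases, feedback of exactly this kind is what lets a system learn patterns, outcomes and which sources to trust.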


The Cleveland Clinic’s projects will give doctors and medical students access to Watson’s reasoning process so they can see how the computer reaches its conclusions. What is the rationale for that feature?

Cleveland Clinic Lerner College of Medicine has put Watson in a team environment with aspiring med students, and they use Watson to explore complex medical situations. In that way the students are able to learn and explore the data that Watson has access to. And very interestingly, in reverse, the students are also teaching Watson. As Watson hits a bump in its learning, often a student can teach Watson the meaning of a word, perhaps allowing Watson to have a breakthrough in knowledge. We’re starting to see both humans and machines working together to be better than just the sum of the parts.


Is having access to the reasoning process also important for double-checking Watson’s logic?

Anyplace a complex decision needs to be made by a human, it’s very important to be able to explore how Watson reached its conclusion. In health care, doctors want to understand what sources Watson consulted in the literature and what connections it made between, say, a medical test and a therapy—where did that information come from, and why did Watson conclude that?—because it might be a surprise. In other areas, such as drug discovery, it’s the same thing—the biologist or chemist wants to understand why Watson concluded that such-and-such a molecule would be the next big drug. They want to explore the data before making a decision.
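
One way to picture that feature is a conclusion that carries its own evidence trail, so a doctor can ask “why?” after the fact. The sketch below is purely illustrative; the class and field names are assumptions, not IBM’s API.

    # Illustrative only: a conclusion that records the sources behind it.
    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        source: str        # e.g., a journal article or a patient record
        claim: str         # the specific connection the source supports
        confidence: float  # how strongly the system weighted it

    @dataclass
    class Conclusion:
        answer: str
        evidence: list = field(default_factory=list)

        def explain(self):
            """Print the chain of sources behind the answer."""
            for e in sorted(self.evidence, key=lambda e: -e.confidence):
                print(f"{e.confidence:.2f}  {e.source}: {e.claim}")

    c = Conclusion("therapy_x")
    c.evidence.append(Evidence("oncology journal, 2012",
                               "test A predicts response to therapy_x", 0.82))
    c.evidence.append(Evidence("anonymized patient record",
                               "marker for test A present", 0.65))
    c.explain()  # lists the sources, strongest first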


How does the latest Watson compare to its game-show predecessor?

It’s roughly three times as fast and about a quarter the size of the original system. Some of that is through hardware optimization but a lot of it is also tuning the underlying machine-learning algorithms to make them much more efficient. That lets us run Watson on a much smaller, yet more powerful, system than we did two years ago.


What will the personal versions of cognitive systems look like?

There are a couple of ways cognitive systems will interact on a personal level. First of all, Watson will be available on a cloud service, so any end-point device—whether it’s your personal [digital] assistant, your glasses, your watch, whatever—will have access to Watson. These cognitive systems will also be important in networks of people. In social media one could envision that a cognitive system is just another node in your personal network that enters into discussions between you and other people, and offers its opinion on certain issues based on available information.
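
On the device side, access to such a cloud service could look as simple as the following sketch, which uses only the Python standard library. The endpoint URL and the JSON shape are made up for illustration; the interview does not describe IBM’s actual service interface.

    import json
    import urllib.request

    # Hypothetical endpoint; any connected device would make the same call.
    ENDPOINT = "https://example.com/cognitive/ask"

    def ask(question):
        """Send a natural-language question to the cloud service."""
        payload = json.dumps({"question": question}).encode("utf-8")
        req = urllib.request.Request(
            ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["answer"]

    # From a watch, glasses or phone, the call is identical:
    # print(ask("Which clinical trials match this patient profile?"))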


Is it possible to shrink such cognitive systems down to a size that you could wear or carry around?

Watson has shrunk considerably in two years. Fast-forward another few years and the power of the Watson that played on Jeopardy!, a system that filled a room just two years ago, will literally fit in your pocket. But also, because it’s cloud accessible, anyplace you’re connected you will be able to connect to Watson through any device. You just need to be able to reach it electronically to get all of the power of Watson.


At a high level, what is the difference between Watson’s cognitive computing capabilities and those of the “neuromorphic” systems under development to mimic the human brain?

Cognitive systems at the beginning of this era of computing were about the size of a room. The Jeopardy! version of Watson utilized 85,000 watts of power to compete with two humans. The human brain uses about 20 watts of power. It took thousands of times more energy for that computer to win just that game than a human brain would have used. Synaptic or neuromorphic systems are basically trying to mimic the performance and the low power of the brain. We’re exploring what that means with very low-power technologies and very high-density network devices to understand whether we can build something that performs more like a brain than today’s hardwired computers.


If we can do that, then we can distribute [mini] cognitive computers everywhere in the world, versus having big, high-power Watson systems available through the cloud. We view it as the possible future of cognitive systems.
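
A quick back-of-envelope check of the power ratio Kelly cites above, taking his figures at face value:

\[
\frac{85{,}000~\text{W}}{20~\text{W}} = 4{,}250
\]

so the Jeopardy!-era Watson drew on the order of four thousand times the power of a human brain, consistent with “thousands of times more energy.”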
