What’s really, really new? Deep Learning.

Continued from Part 2

Based on excerpts from the new book Cognitive Computing: A Brief Guide for Game Changers

Machines learn on their own? Watch this simple everyday explanation by Demis Hassabis, cofounder of DeepMind.

It may sound like fiction and rather far-fetched, but success has already been achieved in certain areas using deep learning, such as image recognition (Facebook’s DeepFace) and speech recognition (IBM’s Watson, Apple’s Siri, Google Now and Waze, Microsoft’s Cortana and the Azure Machine Learning platform).


Beyond the usual big tech company suspects, newcomers in the field of Deep Learning are emerging: Ersatz Labs, BigML, SkyTree, Digital Reasoning, Saffron Technologies, Palantir Technologies, Wise.io, declara, Expect Labs, BlabPredicts, Skymind, Blix, Cognitive Scale, Compsim (KEEL), Kayak, Sentient Technologies, Scaled Inference, Kensho, Nara Logics, Context Relevant, and Deeplearning4j. Some of these newcomers specialize in using cognitive computing to tap Dark Data, a.k.a. Dusty Data: unstructured, untagged and untapped data that sits in data repositories and has never been analyzed or processed. It resembles big data in scale, but differs in that business and IT administrators largely neglect its value.

Machine reading capabilities have a lot to do with unlocking “dark” data. Dark data lives in log files and archives within large, enterprise-class data stores, and it includes all data objects and types that have yet to be analyzed for business or competitive intelligence or to aid business decision making. Typically, dark data is complex to analyze, stored in locations where analysis is difficult, and costly to process. It can also include data the enterprise has not yet captured, or data held outside the organization, such as data stored by partners or customers. IDC, a research firm, has stated that up to 90 percent of big data is dark.

Cognitive computing uses hundreds of analytics that give it capabilities such as natural language processing, text analysis, and knowledge representation and reasoning to …

  • make sense of huge amounts of complex information in split seconds,
  • rank answers (hypotheses) based on evidence and confidence, and
  • learn from its mistakes.

Watson DeepQA Pipeline (Source: IBM)

The DeepQA technology shown in the chart above, and the continuing research underpinning IBM’s Watson, explore how advancing and integrating Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), Knowledge Representation and Reasoning (KR&R), and massively parallel computation can advance the science and application of automatic question answering and general natural language understanding.
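To make the “rank answers (hypotheses) based on evidence and confidence” idea concrete, here is a minimal Python sketch. It is not IBM’s DeepQA implementation; the candidate answers, evidence sources and weights are invented for illustration. The point is simply that each hypothesis is scored against several independent evidence sources, and the weighted combination of those scores becomes its confidence.

```python
# Toy illustration of evidence-based hypothesis ranking (not IBM's DeepQA).
# Each candidate answer is scored by several hypothetical evidence sources;
# a weighted combination of those scores serves as the answer's confidence.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    evidence: dict = field(default_factory=dict)  # evidence source -> score in [0, 1]

    def confidence(self, weights):
        # In a real system these weights would be learned from training data.
        total = sum(weights.values())
        return sum(weights[src] * self.evidence.get(src, 0.0) for src in weights) / total

# Hypothetical evidence sources and illustrative weights.
WEIGHTS = {"passage_support": 0.5, "answer_type_match": 0.3, "source_reliability": 0.2}

candidates = [
    Hypothesis("Toronto", {"passage_support": 0.40, "answer_type_match": 0.20, "source_reliability": 0.70}),
    Hypothesis("Chicago", {"passage_support": 0.85, "answer_type_match": 0.90, "source_reliability": 0.60}),
]

for h in sorted(candidates, key=lambda h: h.confidence(WEIGHTS), reverse=True):
    print(f"{h.answer}: confidence {h.confidence(WEIGHTS):.2f}")
```

In the real pipeline the scoring, evidence gathering and weight learning are far more elaborate and run massively in parallel, but the ranking principle is the same.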

Cognitive computing systems get better over time as they build knowledge and learn a domain—its language and terminology, its processes and its preferred methods of interacting.

Unlike the expert systems of the past, which required a human expert to hard-code rules into the system, cognitive computing systems can process natural language and unstructured data and learn by experience, much as humans do. On the subject of huge amounts of complex information (Big Data), Virginia “Ginni” Rometty, CEO of IBM, stated, “We will look back on this time and look at data as a natural resource that powered the 21st century, just as you look back at hydrocarbons as powering the 19th.”

And, of course, this capability is deployed in the Cloud and made available as a cognitive service, Cognition as a Service (CaaS).

With technologies that respond to voice queries, even people without a smartphone can tap Cognition as a Service. Those with smartphones will no doubt have Cognitive Apps. This means 4.5 billion people can contribute to knowledge and combinatorial innovation, while the GPS capabilities of those phones provide real-time reporting and fully informed decision making: whether for good or evil.

Geoffrey Hinton, the “godfather” of deep learning and co-inventor of the backpropagation and contrastive divergence training algorithms, has revolutionized language understanding and language translation. A spectacular December 2012 live demonstration of instant English-to-Chinese speech recognition and translation by Microsoft Research chief Rick Rashid was one of the many things made possible by Hinton’s work. In it, Rashid demonstrated a speech recognition breakthrough in which his spoken English words were converted, via machine translation, into computer-generated Chinese speech. The breakthrough is built on deep neural networks and significantly reduces errors in spoken as well as written translation. Watch:

YouTube video
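For readers curious about what “backpropagation” actually does, the sketch below trains a tiny two-layer network on the classic XOR problem: the network makes a prediction, measures its error, and propagates that error backwards to nudge every weight. It is a bare-bones illustration of the principle only, not the large-scale speech and translation networks described above.

```python
# Minimal backpropagation demo: a 2-8-1 sigmoid network learning XOR.
# Purely illustrative; real speech/translation systems are far deeper.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the prediction error to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```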

Author: Peter Fingar
Website: http://www.peterfingar.com/

Peter Fingar is an internationally recognized expert on business strategy, globalization and business process management. He's a practitioner with over forty years of hands-on experience at the intersection of business and technology. His seminal book, Business Process Management: The Third Wave, is widely recognized as a key launch pad for the BPM trend in the 21st century.

Peter has held management, technical and advisory positions with GTE Data Services, American Software and Computer Services, Saudi Aramco, EC Cubed, the Technical Resource Connection division of Perot Systems and IBM Global Services.

