Based on excerpts from the new book Cognitive Computing: A Brief Guide for Game Changers
“I think there is a world market for about five computers.”
—remark attributed to Thomas J. Watson (Chairman of the Board of IBM), 1943.
Let’s explore the world of computer hardware that is relevant to cognitive computing and tapping the vast amounts of Big Data being generated by the Internet of Everything. Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images and sounds. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same.
Source: MIT Technology Review
For the past half-century, most computers have run on what's known as the von Neumann architecture. In a von Neumann system, the processing of information and the storage of information are kept separate. Data travels back and forth between the processor and memory, but the computer can't process and store at the same time. By its nature the architecture is a linear process, and this ultimately leads to the von Neumann "bottleneck."
To see what's happening to break the von Neumann bottleneck, let's turn to Wikipedia for a quick introduction to cognitive computers. "A cognitive computer is a proposed computational device with a non-von Neumann architecture that implements learning using Hebbian theory. Hebbian theory is a theory in neuroscience that proposes an explanation for the adaptation of neurons in the brain during the learning process. From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously—and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.
"Instead of being programmable in a traditional sense within machine language or a higher-level programming language, such a device learns by inputting instances through an input device that are aggregated within a computational convolution or neural network architecture consisting of weights within a parallel memory system. An early example of such a device has come from the DARPA SyNAPSE program. SyNAPSE is a backronym standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The name alludes to synapses, the junctions between biological neurons. The program is being undertaken by HRL Laboratories (HRL), Hewlett-Packard, and IBM Research. Announced in 2008, DARPA's SyNAPSE program calls for developing electronic neuromorphic (brain-simulation) machine technology."
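Hebb's weight-update principle described above—weights grow between neurons that activate together and shrink between neurons that activate separately—can be sketched in a few lines of illustrative Python. The learning rate and the ±1 activity values here are arbitrary choices for demonstration, not taken from any particular neuromorphic system:

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """One Hebbian step: strengthen weights between co-active model
    neurons, weaken weights between anti-correlated ones.
    x = pre-synaptic activities, y = post-synaptic activities (+1/-1)."""
    return w + lr * np.outer(y, x)

# Two input neurons driving two output neurons, weights start at zero.
w = np.zeros((2, 2))

# Repeatedly present correlated activity: input 0 fires with output 0,
# input 1 fires with output 1 (opposite sign, so they anti-correlate
# with the other unit).
for _ in range(10):
    x = np.array([1.0, -1.0])
    y = np.array([1.0, -1.0])
    w = hebbian_update(w, x, y)

print(w)
# Co-active pairs end up with positive weights; pairs that activate
# oppositely end up with negative weights, as Hebb's principle predicts.
```

This is the same qualitative behavior the quoted passage describes: "strong positive weights" for nodes that tend to agree, "strong negative weights" for nodes that tend to oppose.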
In August 2014, IBM announced TrueNorth, a brain-inspired computer architecture powered by an unprecedented 1 million neurons and 256 million synapses. It is the largest chip IBM has ever built at 5.4 billion transistors, and has an on-chip network of 4,096 neurosynaptic cores. Yet it consumes only 70 milliwatts during real-time operation, orders of magnitude less energy than traditional chips.
IBM hopes to find ways to scale and shrink silicon chips to make them more efficient, and to research new chip materials such as carbon nanotubes, which are more stable than silicon, are heat resistant, and can provide faster connections.
Meanwhile, SpiNNaker (Spiking Neural Network Architecture) is a computer architecture designed by the Advanced Processor Technologies Research Group (APT) at the School of Computer Science, University of Manchester, led by Steve Furber, to simulate the human brain. It uses ARM processors in a massively parallel computing platform, based on a six-layer thalamocortical model developed by Eugene Izhikevich. SpiNNaker is being used as the Neuromorphic Computing Platform for the Human Brain Project.
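As a rough illustration of the spiking-neuron dynamics that platforms like SpiNNaker simulate, here is a minimal Euler-integration sketch of Izhikevich's single-neuron model. The a, b, c, d parameters are his published "regular spiking" cortical-neuron values; the input current and time step are arbitrary choices for demonstration, and real SpiNNaker runs simulate networks of many such neurons on ARM cores:

```python
# Izhikevich model: v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
# with reset v -> c, u -> u + d whenever v reaches the spike peak.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameters
v = -65.0                            # membrane potential (mV)
u = b * v                            # recovery variable
I = 10.0                             # constant input current (arbitrary)
dt = 0.25                            # integration step (ms)

spikes = 0
for step in range(4000):             # 4000 * 0.25 ms = 1 s of model time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike peak reached
        spikes += 1
        v, u = c, u + d              # after-spike reset
print(f"{spikes} spikes in 1 s of simulated time")
```

With sustained input the model fires tonically; varying a, b, c, d reproduces other firing patterns (bursting, chattering, and so on), which is why Izhikevich-style neurons are a common building block in large thalamocortical simulations.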
Meanwhile, the BrainScaleS project, a European consortium of 13 research groups led by a team at Heidelberg University, Germany, aims to understand information processing in the brain at different scales, ranging from individual neurons to whole functional brain areas. The research involves three approaches: (1) in vivo biological experimentation; (2) simulation on petascale supercomputers; and (3) the construction of neuromorphic processors. The goal is to extract generic theoretical principles of brain function and to use this knowledge to build artificial cognitive systems. Each 20-cm-diameter silicon wafer in the system contains 384 chips, each of which implements 128,000 synapses and up to 512 spiking neurons. This gives a total of around 200,000 neurons and 49 million synapses per wafer, and allows the emulated neural networks to evolve tens of thousands of times faster than real time.
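The per-wafer totals quoted above follow directly from the per-chip figures; a quick arithmetic check in plain Python:

```python
# BrainScaleS wafer figures as stated in the text: 384 chips per wafer,
# each with up to 512 spiking neurons and 128,000 synapses.
chips_per_wafer = 384
neurons_per_chip = 512
synapses_per_chip = 128_000

neurons_per_wafer = chips_per_wafer * neurons_per_chip
synapses_per_wafer = chips_per_wafer * synapses_per_chip

print(neurons_per_wafer, synapses_per_wafer)
# 196,608 neurons and 49,152,000 synapses, i.e. the "around 200,000
# neurons and 49 million synapses per wafer" figure in the text.
```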
In 2014, EMOSHAPE (www.emoshape.com) announced what it billed as a major technology breakthrough: an EPU (emotional processing unit).
Thus, cognitive computers in the future may contain CPUs, GPUs, NPUs, EPUs and Quantum Processing Units (QPUs)!