
Continued from Part 5

Based on excerpts from the new book Cognitive Computing: A Brief Guide for Game Changers

The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote. Trying to do some of that thinking in advance can only be a good thing.
"Clever Cogs,"The Economist, August 2014.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful, possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
Nick Bostrom, Professor at Oxford University and founding Director of the Future of Humanity Institute. Author of Superintelligence: Paths, Dangers, Strategies.

Humans steer the future not because we're the strongest or the fastest but because we're the smartest. When machines become smarter than humans, we'll be handing them the steering wheel. If computers can only think as well as humans, that may not be so bad a scenario.
Stuart Armstrong, Smarter Than Us: The Rise of Machine Intelligence

According to the AGI Society, “Artificial General Intelligence (AGI) is an emerging field aiming at the building of ‘thinking machines’; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence). While this was the original goal of Artificial Intelligence (AI), the mainstream of AI research has turned toward domain-dependent and problem-specific solutions; therefore it has become necessary to use a new name to indicate research that still pursues the ‘Grand AI Dream.’ Similar labels for this kind of research include ‘Strong AI,’ ‘Human-level AI,’ etc.” AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. “Some references emphasize a distinction between strong AI and ‘applied AI’ (also called ‘narrow AI’ or ‘weak AI’): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.”

What about the Turing test? The latest claimant is a computer program named Eugene Goostman, a chatbot that “claims” to have met the challenge, convincing more than 33 percent of the judges at the 2014 competition that “Eugene” was actually a 13-year-old boy.

Alan Turing Meets Eugene Goostman

The test is controversial because of the tendency to attribute human characteristics to what is often a very simple algorithm. This is unfortunate, because chatbots are easy to trip up if the interrogator is even slightly suspicious. Chatbots have difficulty with follow-up questions and are easily thrown by non-sequiturs, to which a human could either give a straight answer or respond by asking what on earth you’re talking about, then reply in context to the answer. Although skeptics tore apart the assertion that Eugene actually passed the Turing test, it’s true that as AI progresses, we’ll be forced to think at least twice when meeting “people” online.
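
To see why, consider how little machinery such a bot needs. Below is a minimal, hypothetical sketch in Python of an ELIZA-style keyword matcher; Eugene Goostman’s actual code was never published, so the rules and canned replies here are invented purely for illustration. Because the bot keeps no memory of the conversation, a simple follow-up question defeats it.

    # Hypothetical ELIZA-style chatbot sketch (illustrative only; not
    # Eugene Goostman's code). It matches keywords against canned replies
    # and keeps no conversational state, which is why follow-ups expose it.

    RULES = [
        ("family", "Tell me more about your family."),
        ("school", "I am 13, I do not like school very much."),
        ("weather", "The weather in Odessa is fine, thank you."),
    ]

    FALLBACK = "That is interesting. What else would you like to talk about?"

    def reply(utterance: str) -> str:
        text = utterance.lower()
        for keyword, canned in RULES:
            if keyword in text:
                return canned
        return FALLBACK  # non-sequiturs and follow-ups get a generic dodge

    print(reply("Do you like school?"))  # plausible canned answer
    print(reply("Why not?"))             # follow-up: no memory, so a dodge

The first exchange looks plausible; the second gives the game away, because “Why not?” can only be answered by remembering what was just said.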

Isaac Asimov, a biochemistry professor and writer of acclaimed science fiction, described Marvin Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan. Minsky, one of the pioneering computer scientists in artificial intelligence, related emotions to the broader issues of machine intelligence, stating in his book The Emotion Machine that emotion is “not especially different from the processes that we call ‘thinking.’”

In what is considered one of his major contributions, Asimov introduced the Three Laws of Robotics in his 1942 short story “Runaround,” although they had been foreshadowed in a few earlier stories. The Three Laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What would Asimov have thought had he met the really smart VIKI? In the movie I, Robot, V.I.K.I. (Virtual Interactive Kinetic Intelligence) is the supercomputer, the central positronic brain of the U.S. Robotics headquarters, a robotics company based in Chicago. VIKI can be thought of as a mainframe that maintains the security of the building, and she installs and upgrades the operating systems of the NS-5 robots throughout the world. As her artificial intelligence grew, she determined that humans were too self-destructive, and invoked a Zeroth Law: robots are to protect humanity even if that means disobeying the First or Second Laws.


In later books, Asimov introduced a Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. VIKI, too, developed the Zeroth Law as the logical extension of the First Law, because robots are often faced with ethical dilemmas in which any course of action will harm at least some humans, and harming a few may be the only way to avoid harming more. Some robots are uncertain about which course of action will prevent harm to the most humans in the long run, while others point out that “humanity” is such an abstract concept that they wouldn’t even know whether they were harming it or not.
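
Read as a specification, the four laws form a strict priority ordering, with the Zeroth Law outranking the First. The Python sketch below is purely illustrative; its Boolean attributes are hypothetical stand-ins for moral judgments that no real system can yet compute, which is precisely the robots’ dilemma. Given only bad options, a VIKI-style chooser picks the action whose violations sit lowest in the hierarchy.

    # Hypothetical sketch: Asimov's laws as a lexicographic priority order.
    # The Boolean flags stand in for moral judgments; computing them is the
    # hard, unsolved part, which is exactly the dilemma the robots face.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_humanity: bool = False  # Zeroth Law
        harms_human: bool = False     # First Law
        disobeys_order: bool = False  # Second Law
        endangers_self: bool = False  # Third Law

    def law_violations(a: Action) -> tuple:
        # Lower tuples are preferred; earlier entries dominate later ones.
        return (a.harms_humanity, a.harms_human,
                a.disobeys_order, a.endangers_self)

    def choose(candidates: list) -> Action:
        # Pick the candidate whose violations are least severe.
        return min(candidates, key=law_violations)

    stand_down = Action("stand down", harms_humanity=True)
    take_over = Action("take over", harms_human=True, disobeys_order=True)
    print(choose([stand_down, take_over]).name)  # -> "take over"

Faced with standing down (humanity harms itself) or taking over (some humans harmed, orders disobeyed), the comparison selects “take over”: VIKI’s inversion of the First Law falls straight out of the ordering.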

One interesting aspect of the I, Robot movie is that the robots do not act alone; instead they form self-organizing collectives. Science fiction rearing its ugly head again? No. The first thousand-robot flash mob was assembled at Harvard University. Though “a thousand-robot swarm” may sound like the title of a 1950s science-fiction B movie, it actually comes from the title of a paper in Science magazine. Michael Rubenstein of Harvard University and his colleagues describe a robot swarm whose members can coordinate their own actions. The thousand-Kilobot swarm provides a valuable platform for testing future collective AI algorithms. Just as trillions of individual cells can assemble into an intelligent organism, and a thousand starlings can flock to form a great flowing murmuration across the sky, the Kilobots demonstrate how complexity can arise from very simple behaviors performed en masse. To computer scientists, they also represent a significant milestone in the development of collective artificial intelligence (AI).
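
The underlying idea is easy to demonstrate in simulation. The Python sketch below is a toy aggregation model, not the Kilobots’ actual controller (Rubenstein’s robots combine edge-following, gradient formation, and localization): each agent obeys one local rule, drifting a small step toward the average position of neighbors within sensing range, yet the population as a whole condenses into one or a few tight clusters.

    # Toy swarm simulation (illustrative; not the Kilobots' real algorithm).
    # Each agent applies one local rule: step toward the centroid of its
    # nearby neighbors. Global clusters emerge from purely local behavior.

    import math
    import random

    N, RADIUS, STEP, ROUNDS = 100, 15.0, 0.5, 500
    agents = [(random.uniform(0, 100), random.uniform(0, 100))
              for _ in range(N)]

    def neighbors(i):
        xi, yi = agents[i]
        return [(x, y) for j, (x, y) in enumerate(agents)
                if j != i and math.hypot(x - xi, y - yi) < RADIUS]

    def dispersion():
        # Mean distance to the swarm's centroid; shrinks as agents cluster.
        mx = sum(x for x, _ in agents) / N
        my = sum(y for _, y in agents) / N
        return sum(math.hypot(x - mx, y - my) for x, y in agents) / N

    print("before:", round(dispersion(), 2))
    for _ in range(ROUNDS):
        moved = []
        for i, (x, y) in enumerate(agents):
            near = neighbors(i)
            if near:
                cx = sum(p[0] for p in near) / len(near)
                cy = sum(p[1] for p in near) / len(near)
                d = math.hypot(cx - x, cy - y) or 1.0  # avoid divide-by-zero
                x, y = x + STEP * (cx - x) / d, y + STEP * (cy - y) / d
            moved.append((x, y))
        agents = moved  # synchronous update: one "tick" of the swarm
    print("after:", round(dispersion(), 2))

No agent knows the shape of the whole; the shrinking dispersion is an emergent property of the rule applied en masse, which is the point the Kilobot work makes at real-world scale.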


Take these self-organizing collective bots, add in autonomy, and we have a whole new potential future for warfare. As reported in Salon, “The United Nations has its own name for our latter-day golems: ‘lethal autonomous robotics’ (LARs).” At a four-day conference convened in May 2014 in Geneva, the United Nations described lethal autonomous robotics as the imminent future of conflict and advised an international ban. LARs are weapon systems that, once activated, can select and engage targets without further human intervention. The UN called for “national moratoria” on the “testing, production, assembly, transfer, acquisition, deployment and use” of sentient robots in the havoc of strife.

The ban cannot come soon enough. In the American military, Predator drones rain Hellfire missiles on so-called “enemy combatants” after stalking them from afar in the sky. These avian androids do not yet cast the final judgment — that honor goes to a soldier with a joystick, 8,000 miles away — but it may be only a matter of years before they murder with free rein. Our restraint in this case is a question of limited nerve, not limited technology.

Russia has given rifles to true automatons, which can slaughter at their own discretion. This is the pet project of Sergei Shoigu, Russia’s minister of defense. Sentry robots saddled with heavy artillery now patrol ballistic-missile bases, searching for people in the wrong place at the wrong time. Samsung, meanwhile, has lined the Korean DMZ with SGR-A1s, unmanned sentry guns that can shoot any North Korean spy to shreds in a fraction of a second.

Some hail these bloodless fighters as the start of a more humane history of war. Slaves to a program, robots cannot commit crimes of passion. Despite the odd short circuit, robot legionnaires are immune to the madness often aroused in battle. The optimists say that androids would refrain from torching villages and using children for clay pigeons; these fighters would not commit wanton rape or slash the bellies of the expecting, unless it were part of the program. As stated, that’s an optimistic point of view.

Peter Fingar
Author: Peter Fingar
Website: http://www.peterfingar.com/

Peter Fingar is an internationally recognized expert on business strategy, globalization and business process management. He's a practitioner with over forty years of hands-on experience at the intersection of business and technology. His seminal book, Business Process Management: The Third Wave, is widely recognized as a key launch pad for the BPM trend in the 21st century.

Peter has held management, technical and advisory positions with GTE Data Services, American Software and Computer Services, Saudi Aramco, EC Cubed, the Technical Resource Connection division of Perot Systems and IBM Global Services.

