2003 Report on “Synthetic Humans,” Sandia National Laboratories

MACHINE THINKS, THEREFORE IT IS, August 27, 2003

Reader’s advisory: Wired News has been unable to confirm some sources for a number of stories written by this author. If you have any information about sources cited in this article, please send an e-mail to sourceinfo[AT]wired.com.

A new type of thinking machine that could completely change how people interact with computers is being developed at the Department of Energy’s Sandia National Laboratories.

Over the past five years, a team led by Sandia cognitive psychologist Chris Forsythe has been working on creating intelligent machines: computers that can accurately infer intent, remember prior experiences with users, and allow users to call upon simulated experts to help them analyze problems and make decisions.

Forsythe’s team was originally trying to create a “synthetic human” – software capable of thinking like a person – for use in national defense.

The thinking software was to create profiles of specific political leaders or entire populations. Once programmed, these synthetic humans could, combined with analytic tools, predict potential responses to various hypothetical situations.

But along the way, the experiment took a different turn.

Forsythe needed help with the software, and asked some of the programmers in Sandia’s robot lab for assistance. The robotics researchers immediately saw that the technology could be used to develop intelligent machines, and the research’s focus quickly morphed from creating computerized people to creating computers that can help people by acting more like them.

Synthetic humans are still a big part of the Sandia cognitive machines project, but researchers have now extended their idea of what the technology can and will ultimately be used for.

“We would like to advance the field of simulation, and particularly simulations involving synthetic humans, to the point that it becomes a practical tool that can be used by anyone to answer a wide range of everyday questions,” said Forsythe.

But fear not – this is not a new incarnation of Clippy the paperclip, Microsoft’s much maligned “helper application.”

“Clippy is a wonderful example of what not to do,” said Forsythe. “Actually, most forms of online help are good examples of what not to do.”

When two humans interact, two (hopefully) cognitive entities are communicating. As cognitive entities – thinking beings – each has some sense of what the other knows and does not know. They may have shared past experiences that they can use to put current events in context; they might recognize each other’s particular sensitivities.

In contrast, Forsythe said, Clippy illustrates a flawed one-size-fits-all, lowest-common-denominator approach.

Forsythe and his team are trying to mimic real human interaction, embedding within computers a highly realistic cognitive model so that the machine’s interactions with the user more closely resemble communication between two thinking humans.

“If you had an aide tasked with watching everything you do, learning everything they could about you and helping you in whatever way they could, it is extremely unlikely that your interactions with that aide would in any way resemble interactions with Clippy,” Forsythe said.

Forsythe believes the technology his team is developing will eventually be ubiquitous and allow almost anyone to quickly configure and execute relatively complex computer simulations.

“For instance, sitting in my car at a red light, I should be able to set up and run a simulation that shows me possible effects on traffic of the accident that is ahead of me,” Forsythe said.

“Such a tool would not necessarily tell me the answer, but it would augment my own cognitive processes by making me aware of potential realities, as well as the interrelationships between various factors that I may or may not be able to control, influence or avoid.”

Computer software often, but not exclusively, relies on programmed rules. If “A” happens, then so does “B.” Humans are a bit more complex. Stress, fatigue, anger, hunger, joy and differing levels of ability can change how humans respond to any given stimulus.
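The contrast the article draws can be sketched in a few lines of code. The functions, factor names, and thresholds below are purely illustrative assumptions, not anything from Sandia’s models: they just show how a fixed if-then rule differs from a response modulated by “organic” state.

```python
# Hypothetical sketch: a fixed rule always maps stimulus "A" to response "B",
# while a human-like model modulates the response with organic state such as
# stress and fatigue. Names and thresholds are invented for illustration.

def rule_based_response(stimulus: str) -> str:
    # Classic programmed rule: if "A" happens, "B" follows, every time.
    return "B" if stimulus == "A" else "ignore"

def humanlike_response(stimulus: str, stress: float, fatigue: float) -> str:
    # The same stimulus can yield different responses depending on state.
    if stimulus != "A":
        return "ignore"
    if fatigue > 0.8:
        return "delayed B"   # a tired operator responds slowly
    if stress > 0.7:
        return "hasty C"     # a stressed operator may respond differently
    return "B"

print(rule_based_response("A"))                          # always "B"
print(humanlike_response("A", stress=0.9, fatigue=0.1))  # "hasty C"
```

The point of the sketch is that the second function has no single answer for stimulus “A”; the mapping from input to output passes through internal state, which is the property the rule-based model lacks.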

“Humans are certainly capable of logical operations, but there is much more to human cognition,” said Forsythe.

“We’ve focused on replicating the processes whereby an individual applies their unique knowledge to interpret ongoing situations or events. This is a pattern recognition process that involves episodic memory and emotional processes but not much of what one would typically consider logical operations.”

Sandia’s work on cognitive machines took off in 2002 with funding from the Defense Advanced Research Projects Agency to develop a real-time machine that could figure out what its user is thinking.

This capability would provide the potential for systems capable of augmenting the mental abilities of their users through “discrepancy detection,” in which a machine uses an operator’s cognitive model – what the machine knows about its user – to monitor its own state.

When the machine’s actual state diverges from what the model suggests the operator perceives or intends, it can signal a discrepancy alert.
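A minimal sketch of this idea, assuming an invented operator model with just two fields (the class, field names, and tolerance are hypothetical, not Sandia’s actual design): the machine checks its real state against what the model says the operator expects, and collects an alert for each mismatch.

```python
# Hypothetical sketch of discrepancy detection: compare the machine's actual
# state against a (very simplified) model of the operator's beliefs and
# report any divergence. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class OperatorModel:
    """What the machine believes the operator believes about the system."""
    expected_mode: str
    expected_altitude: float

def detect_discrepancy(actual_mode: str, actual_altitude: float,
                       model: OperatorModel,
                       altitude_tolerance: float = 100.0) -> list:
    alerts = []
    if actual_mode != model.expected_mode:
        alerts.append(f"mode is {actual_mode!r}, operator likely "
                      f"expects {model.expected_mode!r}")
    if abs(actual_altitude - model.expected_altitude) > altitude_tolerance:
        alerts.append("altitude differs from operator's assumed value")
    return alerts

model = OperatorModel(expected_mode="autopilot", expected_altitude=9000.0)
print(detect_discrepancy("manual", 8500.0, model))  # two alerts
```

Note that the machine is not judging the operator; it is monitoring itself and flagging moments when its state may no longer match the operator’s mental picture of it.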

The idea is to figure out ways to make humans smarter by improving human-hardware interactions, said John Wagner, manager of Sandia’s Computational Initiatives Department.

Early this year work began on Sandia’s Next Generation Intelligent Systems Grand Challenge project. The goal of Grand Challenge is to significantly improve the human capability to understand and solve national security problems, given the exponential growth of information and very complex environments, said Larry Ellis, the principal investigator.

Forsythe believes that cognitive machine technology will be embedded in most computer systems within the next 10 years. His team has completed trial runs of methodologies that allow the knowledge of a specific expert to be captured in computer models.

They’ve also worked out methods to provide synthetic humans with episodic memory (memory of experiences) so that computers might apply their knowledge of specific experiences to solving problems in a manner that closely parallels what people do on a regular basis.
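One simple way to picture episodic memory in software, offered here purely as an assumed sketch (the episodes, features, and similarity measure are invented; the article does not describe Sandia’s actual scheme): store past experiences as situation features paired with outcomes, then handle a new situation by recalling the most similar prior episode.

```python
# Hypothetical sketch of episodic memory: recall the most similar past
# experience and reuse its outcome. Episodes and features are invented
# for illustration only.

import math

episodes = [
    # (situation features, what worked in that situation)
    ({"traffic": 0.9, "weather": 0.2}, "reroute via side streets"),
    ({"traffic": 0.1, "weather": 0.8}, "slow down, keep route"),
    ({"traffic": 0.5, "weather": 0.5}, "wait it out"),
]

def similarity(a: dict, b: dict) -> float:
    # Negative Euclidean distance over shared feature keys:
    # larger (closer to zero) means more similar.
    return -math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def recall(current: dict) -> str:
    # Return the outcome of the most similar past episode.
    best = max(episodes, key=lambda ep: similarity(current, ep[0]))
    return best[1]

print(recall({"traffic": 0.85, "weather": 0.3}))  # → "reroute via side streets"
```

This pattern-matching style of decision, recognizing that a current situation resembles a remembered one, is closer to what the article describes than a chain of logical deductions would be.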

“I can think of no better use of available CPU cycles than to have the machine learn and adapt to the individual user,” said Forsythe. “It’s the old issue of homogeneity vs. heterogeneity.”

“Throughout the history of the computer industry, the tendency has been to force users toward a homogeneous model, instead of acknowledging and embracing the individual variability that users bring to computing environments.”

II. August 13, 2003

Sandia team develops cognitive machines

Machines accurately infer user intent, remember experiences and allow users to call upon simulated experts

SANDIA SOFTWARE DEVELOPER Rob Abbott operates the DDD-AWACS simulation trainer while a cognitive model of the software runs simultaneously. The cognitive model can detect when Rob makes an error and alert him to it. (Photo by Randy Montoya)

ALBUQUERQUE, N.M. — A new type of “smart” machine that could fundamentally change how people interact with computers is on the not-too-distant horizon at the Department of Energy’s Sandia National Laboratories.

Over the past five years a team led by Sandia cognitive psychologist Chris Forsythe has been developing cognitive machines that accurately infer user intent, remember experiences with users and allow users to call upon simulated experts to help them analyze situations and make decisions.

“In the long term, the benefits from this effort are expected to include augmenting human effectiveness and embedding these cognitive models into systems like robots and vehicles for better human-hardware interactions,” says John Wagner, manager of Sandia’s Computational Initiatives Department. “We expect to be able to model, simulate and analyze humans and societies of humans for Department of Energy, military and national security applications.”

Synthetic human
The initial goal of the work was to create a “synthetic human” — a software program that could think like a person.

“We had the massive computers that could process large amounts of data, but software that could realistically model how people think and make decisions was missing,” Forsythe says.

There were two significant problems with modeling software. First, the software did not relate to how people actually make decisions. It followed logical processes, something people don’t necessarily do. People make decisions based, in part, on experiences and associative knowledge. In addition, software models of human cognition did not take into account organic factors such as emotions, stress, and fatigue — vital to realistically simulating human thought processes.

In an early project Forsythe developed the framework for a computer program that had both cognition and organic factors, all in the effort to create a “synthetic human.”

Follow-on projects developed methodologies that allowed the knowledge of a specific expert to be captured in the computer models and provided synthetic humans with episodic memory — memory of experiences — so they might apply their knowledge of specific experiences to solving problems in a manner that closely parallels what people do on a regular basis.

Strange twist
Forsythe says a strange twist occurred along the way.

“I needed help with the software,” Forsythe says. “I turned to some folks in Robotics, bringing to their attention that we were developing computer models of human cognition.”

The robotics researchers immediately saw that the model could be used for intelligent machines, and the whole program emphasis changed. Suddenly the team was working on cognitive machines, not just synthetic humans.

Work on cognitive machines took off in 2002 with funding from the Defense Advanced Research Projects Agency (DARPA) to develop a real-time machine that can infer an operator’s cognitive processes. This capability provides the potential for systems that augment the cognitive capacities of an operator through “Discrepancy Detection.” In Discrepancy Detection, the machine uses an operator’s cognitive model to monitor its own state; when there is evidence of a discrepancy between the machine’s actual state and the operator’s perceptions or behavior, an alert may be signaled.

Early this year work began on Sandia’s Next Generation Intelligent Systems Grand Challenge project. “The goal of this Grand Challenge is to significantly improve the human capability to understand and solve national security problems, given the exponential growth of information and very complex environments,” says Larry Ellis, the principal investigator. “We are integrating extraordinary perceptive techniques with cognitive systems to augment the capacity of analysts, engineers, war fighters, critical decision makers, scientists and others in crucial jobs to detect and interpret meaningful patterns based on large volumes of data derived from diverse sources.”

“Overall, these projects are developing technology to fundamentally change the nature of human-machine interactions,” Forsythe says. “Our approach is to embed within the machine a highly realistic computer model of the cognitive processes that underlie human situation awareness and naturalistic decision making. Systems using this technology are tailored to a specific user, including the user’s unique knowledge and understanding of the task.”

The idea borrows from a very successful analogue. When people interact with one another, they modify what they say and don’t say with regard to such things as what the person knows or doesn’t know, shared experiences and known sensitivities. The goal is to give machines highly realistic models of the same cognitive processes so that human-machine interactions have essential characteristics of human-human interactions.

“It’s entirely possible that these cognitive machines could be incorporated into most computer systems produced within 10 years,” Forsythe says.

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major research and development responsibilities in national security, energy and environmental technologies, and economic competitiveness.
