Strong Artificial Intelligence and Consciousness

    Providing an alternative to the theories of weak artificial intelligence is strong artificial intelligence. This approach redefines intelligence to include more than the ability to solve complex tasks, or simply to convince observers that such a quality exists within a system, as in the Turing Test. Strong AI theory rests on the principle that complex machine systems such as neural networks can establish connections between sets of data that were not previously programmed into the system; in other words, they can learn. A system which begins and continues to learn, creating and building a knowledge base, it is theorized, increasingly has the ability to exhibit intelligent behavior (Gackenbach, Guthrie, & Karpen 1998).
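As a rough illustration of this principle (not any specific system the sources describe), the sketch below trains a single perceptron whose connection weights start at zero and are shaped only by exposure to examples of the logical OR function. Nothing about OR is hand-coded; the "connections" emerge from the data.

```python
# A minimal sketch of learning connections that were never programmed in:
# a perceptron's weights start at zero and are adjusted only by examples.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from 2-input binary examples [(x1, x2, target), ...]."""
    w = [0.0, 0.0]   # connection strengths -- nothing is hand-coded
    b = 0.0          # bias term
    for _ in range(epochs):
        for x1, x2, target in examples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Strengthen or weaken each connection based on the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The OR function, presented purely as data, never as a rule:
examples = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
w, b = train_perceptron(examples)
```

After training, `predict` reproduces OR for all four inputs, even though no rule for OR was ever written into the program, only a procedure for adjusting connections.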

    This is not intelligence as defined by Turing; instead, it is the very real ability of a system to solve problems of computation or reasoning through trial and error. If one method of problem solving does not produce the desired result, a system with an appropriate number of connections can explore different possibilities, much as a human mind would analyze a problem (Minsky 1982). There has been much debate over whether an intelligent, much less conscious, network would truly emerge from such learning. Many argue that consciousness does not simply arise out of intelligent behavior (Gackenbach et al. 1998). Instead, resting on dualistic principles, they hold that self-awareness cannot be duplicated just by arranging the appropriate pieces of a network and letting them function; there is a property which is intangible and thus unable to be "built in."
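The trial-and-error pattern described above can be sketched in a few lines. This is a hedged toy example, not a model of any real network: the "strategies" and the toy problem are invented here purely to show a system falling back to a different approach when the first one fails.

```python
# Trial-and-error problem solving: try one candidate method after
# another, keeping the first that produces an acceptable result.

def solve_by_trial(problem, strategies):
    """Try each named strategy in turn; return the first that passes the check."""
    for name, strategy in strategies:
        result = strategy(problem)
        if problem["check"](result):
            return name, result
    return None, None

# Toy problem: find a positive integer x with x * x == 49.
problem = {"check": lambda x: x is not None and x * x == 49 and x > 0}

strategies = [
    # First method searches too narrow a range and fails...
    ("guess_small", lambda p: next((x for x in range(0, 5) if p["check"](x)), None)),
    # ...so the system falls back to a wider search.
    ("guess_wider", lambda p: next((x for x in range(0, 100) if p["check"](x)), None)),
]
method, answer = solve_by_trial(problem, strategies)
# The first strategy fails; the second succeeds with x = 7
```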

The answer offered by the strong AI position holds that consciousness is an "emergent [property] of any computational system with sufficient levels of self-modification" (Hunt 1995, 59). That is, consciousness and self-awareness can be (or will eventually be) duplicated in any system which demonstrates enough capacity to learn or self-program. This idea that consciousness as we view it, as an awareness of ourselves, as sentience, emotion, and introspection, can be simulated by assembling the appropriate bases is also examined by Penrose (1989): "[All] mental qualities--thinking, feeling, intelligence, understanding, consciousness--are to be regarded, according to this view, merely as aspects of this complicated functioning" of the physical brain. If this is true, and the qualities of thought which set us distinctly apart from machines are simply byproducts of the physiological processes of the brain, then in theory they can be replicated in any system which accurately imitates this biological network.

But how would such complex development come about? Marvin Minsky (1982) offers insight into how to theoretically equip a machine with the resources necessary to become self-aware. The critical component in creating an intelligent system, Minsky says, is to give it the capacity to reflect. If given a problem to solve, and the first attempt fails, a truly intelligent system will not simply try other solutions until the appropriate one is reached. Although this is inherently a process of learning, if the system merely retains the "memory" of what worked, all it has really done is reprogram itself. Instead, a reflective machine will attempt to analyze the problem: rather than mindlessly searching for a workable solution, it will make an effort to understand what the problem really is, and why the first solution failed, much as humans try to reason their way through complex questions.
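Minsky's distinction can be made concrete with a toy guessing game, offered here as an illustrative sketch under our own assumptions, not as anything Minsky himself implemented. A "blind" solver tries every candidate in order; a "reflective" solver asks *why* each guess failed (too low or too high) and uses that diagnosis to discard half the remaining possibilities.

```python
# Blind retry vs. reflective analysis of failure, on a number-guessing task.

def blind_solver(check, lo, hi):
    """Try every candidate in order; count the attempts."""
    attempts = 0
    for guess in range(lo, hi + 1):
        attempts += 1
        if check(guess) == "correct":
            return guess, attempts
    return None, attempts

def reflective_solver(check, lo, hi):
    """Use the reason a guess failed to halve the remaining search space."""
    attempts = 0
    while lo <= hi:
        guess = (lo + hi) // 2
        attempts += 1
        verdict = check(guess)
        if verdict == "correct":
            return guess, attempts
        elif verdict == "too_low":   # the diagnosis tells us which half
            lo = guess + 1           # of the space to abandon
        else:
            hi = guess - 1
    return None, attempts

secret = 71
def check(guess):
    if guess == secret:
        return "correct"
    return "too_low" if guess < secret else "too_high"

blind = blind_solver(check, 0, 100)       # finds 71 after 72 attempts
smart = reflective_solver(check, 0, 100)  # finds 71 after only 5 attempts
```

Both solvers "learn" which guesses fail, but only the reflective one uses an understanding of *why* they fail, which is the difference Minsky highlights.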

Admittedly, such machines are a long way off, for the technology required to allow a machine to perform meaningful self-analyses simply has not yet been developed. Presently, we do not have the means to mimic that most complex of intelligent systems, the human brain. It is our ability to build and constantly rewire the synaptic connections of the brain which gives us our unique capacity to learn and understand (Gackenbach et al. 1998). In other words, until we are able to develop a system which can mimic the plasticity of the human body, it is doubtful that machines will become "conscious" in the way we apply the word to ourselves.

There are some people and organizations who believe that they have already discovered the secret to artificial intelligence; some have even gone so far as to patent their ideas. The website for Imagination Engines, Inc. (IEI) claims that it has developed a system which is "capable of human-level discovery, invention, and artistic creativity." Whether this holds true remains to be seen, as the scientific community has not yet embraced the invention as "the single most important development in history," as IEI also claims. There can be little doubt, however, that eventually we will possess the technology and sophistication to turn ordinary machines into living, conscious beings. Which, of course, will allow us to tackle the next question: Do we really want to?

This project was produced for PSY 380, Social Psychology of Cyberspace, Spring 2000, at Miami University. All graphics in these pages are used with permission or under fair use guidelines, are in the public domain, or were created by the authors. Last revised: Tuesday, March 11, 2014 at 17:34:09. Comments & questions to R. Sherman.