  • This paper critically tracks the development of the machine consciousness paradigm, from the incredulity of the 1990s, through the structuring at the turn of this century, to the consolidation of the present day, which forms the basis for conjecture about the future. The underlying question is how this development may have changed our understanding of consciousness, and whether an artificial version of the concept contributes to the improvement of computational machinery and robots. The paper includes suggestions about which lines of research might be profitable and which might not.

  • The stated aim of adherents to the paradigm called biologically inspired cognitive architectures (BICA) is to build machines that address "the challenge of creating a real-life computational equivalent of the human mind" (from the mission statement of the new BICA journal). In contrast, practitioners of machine consciousness (MC) are driven by the observation that the human minds for which one is trying to find equivalents are generally thought to be conscious. (Of course, this is controversial, because there is no direct evidence of consciousness in behavior; but as the hypothesis that others are conscious is commonly accepted, its rejection has to be considered just as much as its acceptance.) This paper asks whether those who would like to build computational equivalents of the human mind can do so while ignoring the role of consciousness in what is called the mind. Consciousness is not ignored in the MC paradigm, and the consequences, particularly for phenomenological treatments of the mind, are briefly explored. A measure based on a subjective feel for how well a model matches personal experience is introduced. An example is given which illustrates how MC can clarify the double-cognition tenet of Strawson's cognitive phenomenology.

  • Examined here is the work done in many laboratories on the proposition that the mechanisms underlying consciousness in living organisms can be studied using computational theories. This follows an agreement, reached at a 2001 multi-disciplinary meeting of philosophers, neuroscientists and computer scientists, that such a research programme was feasible and worthwhile. The effort is reviewed both as a historical statement and for the positions held at the time this volume went to print. The approaches cover diverse techniques, ranging from machine modeling of neural structures in the brain to abstract models based on the kind of logic found in computer programming. Purely theoretical approaches, based on hypotheses about what kind of information constitutes a mental state, are also included.

  • In the period between Turing’s 1950 “Computing Machinery and Intelligence” and the current considerable public exposure to the term “artificial intelligence” (AI), Turing’s question “Can a machine think?” has become a topic of daily debate in the media, the home and, indeed, the pub. However, “Can a machine think?” is sliding towards a more controversial issue: “Can a machine be conscious?” The two issues are, of course, linked: it is held here that consciousness is a prerequisite for thought. In Turing’s imitation game, a conscious human player is replaced by a machine which, in the first place, is assumed not to be conscious but which may nevertheless fool an interlocutor, since consciousness cannot be perceived in an individual’s speech or actions. Here, the developing paradigm of machine consciousness is examined and combined with an extant analysis of living consciousness to argue that a conscious machine is feasible, and capable of thinking. The route to this conclusion utilizes learning in a “neural state machine”, which brings into play Turing’s view of neural “unorganized” machines. The conclusion is that a machine of the “unorganized” kind could have an artificial form of consciousness that resembles the natural form and throws some light on its nature.

  • While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper's review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!

  • The concept of qualia poses a central problem in the framework of consciousness studies. Although qualia are a controversial issue even in the study of human consciousness, we argue that they can also be studied, in a complementary way, using artificial cognitive architectures. In this work we address the problem of defining qualia in the domain of artificial systems, providing a model of “artificial qualia”. Furthermore, we partially apply the proposed model to the generation of visual qualia using the cognitive architecture CERA-CRANIUM, which is modeled after the global workspace theory of consciousness. Our aim is to define, characterize and identify artificial qualia as direct products of a simulated conscious perception process. Simple forms of the apparent-motion effect are used as the basis for a preliminary experimental setting focused on the simulation and analysis of synthetic visual experience. In contrast with the study of biological brains, the dynamics and transient inner states of the artificial cognitive architecture can be inspected effectively, enabling detailed analysis of the covert and overt percepts the system generates when confronted with specific visual stimuli. The states observed in the architecture during the simulation of apparent-motion effects are used to discuss the existence of possible analogous mechanisms in human cognition.

Last update from database: 3/23/25, 8:36 AM (UTC)