I argue here that consciousness can be engineered. The claim that functional consciousness can be engineered has been persuasively put forth with regard to first-person functional consciousness; robots, for instance, can recognize colors, though much debate remains about the details of this sort of consciousness. Such consciousness has now become one of the meanings of the term phenomenal consciousness (e.g., as used by Franklin and Baars). Yet I extend the argument beyond the tradition of behaviorist or functional reductive views on consciousness that still predominate within cognitive science. If Nagel-Chalmers-Block-style non-reductive naturalism about first-person consciousness (h-consciousness) holds true, then we should eventually be able to understand how such consciousness operates and how it is produced (this is not the same as bridging the explanatory gap or solving Chalmers's hard problem of consciousness). If so, the consciousness it involves can in principle be engineered.
-
This paper argues that conscious attention exists not so much for selecting an immediate action as for focusing the learning of action-selection mechanisms and predictive models on tasks and environmental contingencies likely to affect the conscious agent. It is perfectly possible to build this sort of system into machine intelligence, but it is not strictly necessary unless the intelligence needs to learn and is resource-bounded with respect to the rate of learning versus the rate of relevant environmental change. Support for this theory is drawn from scientific research and AI simulations, and a few consequences are suggested with respect to self-consciousness and ethical obligations to and for AI.
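A minimal sketch of the resource-bounded claim (not from the paper; all names are hypothetical): with a budget of one learning update per time step, an attentional spotlight routes that update to whichever predictive model shows the largest recent prediction error, i.e., the contingency most likely to affect the agent.

    class PredictiveModel:
        """Toy predictor tracking one environmental contingency."""
        def __init__(self):
            self.estimate = 0.0
            self.last_error = 0.0

        def observe(self, observation):
            self.last_error = abs(observation - self.estimate)

        def learn(self, observation, rate=0.5):
            self.estimate += rate * (observation - self.estimate)

    def attend_and_learn(models, observations):
        """Resource bound: one learning update per step, spent where error is largest."""
        for model, obs in zip(models, observations):
            model.observe(obs)
        focus = max(range(len(models)), key=lambda i: models[i].last_error)
        models[focus].learn(observations[focus])   # the attentional spotlight
        return focus

    # Three contingencies; only the second is actually changing.
    models = [PredictiveModel() for _ in range(3)]
    for step in range(20):
        focus = attend_and_learn(models, [0.0, step * 0.1, 0.0])
    print(focus, round(models[1].estimate, 2))   # learning concentrated on the drift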
-
We present a two-level model of concurrent communicating systems (CCS) to serve as a basis for machine consciousness. A language implementing threads within logic programming is first introduced. This high-level framework allows for the definition of abstract processes that can be executed on a virtual machine. We then look for a possible grounding of these processes in the brain. Towards this end, we map abstract definitions (including logical expressions representing compiled knowledge) onto a variant of the π-calculus. We illustrate this approach through a series of examples extending from a purely reactive behavior to patterns of consciousness.
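The paper's formalism is not reproduced here, but a minimal Python sketch (queue-based channels standing in for π-calculus channels; all names are hypothetical) illustrates the simplest case in such a series, a purely reactive process communicating over channels:

    import threading, queue

    def reactive(stimuli, responses):
        """Purely reactive process: map each stimulus straight to a response."""
        while True:
            s = stimuli.get()
            if s is None:              # termination signal
                responses.put(None)
                return
            responses.put(f"react:{s}")

    stimuli, responses = queue.Queue(), queue.Queue()
    threading.Thread(target=reactive, args=(stimuli, responses), daemon=True).start()

    for s in ["light", "sound", None]:
        stimuli.put(s)
    while (r := responses.get()) is not None:
        print(r)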
-
The concept of qualia poses a central problem in the framework of consciousness studies. Although qualia remain a controversial issue even in the study of human consciousness, we argue that they can be complementarily studied using artificial cognitive architectures. In this work we address the problem of defining qualia in the domain of artificial systems, providing a model of "artificial qualia". Furthermore, we partially apply the proposed model to the generation of visual qualia using the cognitive architecture CERA-CRANIUM, which is modeled after the global workspace theory of consciousness. Our aim is to define, characterize, and identify artificial qualia as direct products of a simulated conscious perception process. Simple forms of the apparent motion effect are used as the basis for a preliminary experimental setting focused on the simulation and analysis of synthetic visual experience. In contrast to the study of biological brains, the inspection of the dynamics and transient inner states of an artificial cognitive architecture can be performed effectively, thus enabling a detailed analysis of the covert and overt percepts generated by the system when it is confronted with specific visual stimuli. The states observed in the artificial cognitive architecture during the simulation of apparent motion effects are used to discuss the existence of possible analogous mechanisms in human cognitive processes.
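As a hedged illustration of the global-workspace dynamic described above (not CERA-CRANIUM itself; the processors and salience values are invented), specialized processors propose covert percepts that compete for a single broadcast, which becomes the overt percept:

    # Specialized processors each propose a covert percept with a salience value.
    proposals = [
        ("edge-detector",   {"percept": "dot at t0",       "salience": 0.4}),
        ("motion-detector", {"percept": "apparent motion", "salience": 0.9}),
        ("color-processor", {"percept": "gray dot",        "salience": 0.2}),
    ]

    # The workspace selects the most salient proposal...
    winner_source, winner = max(proposals, key=lambda p: p[1]["salience"])

    # ...and broadcasts it globally, making it the overt (conscious) percept.
    for source, _ in proposals:
        print(f"broadcast to {source}: {winner['percept']} (from {winner_source})")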
-
The progress in the machine consciousness research field has to be assessed in terms of the features demonstrated by the new models and implementations currently being designed. In this paper, we focus on the functional aspects of consciousness and propose the application of a revised version of ConsScale (a biologically inspired scale for measuring cognitive development in artificial agents) in order to assess the cognitive capabilities of machine consciousness implementations. We argue that progress in the implementation of consciousness in artificial agents can be assessed by looking at how key cognitive abilities associated with consciousness are integrated within artificial systems. Specifically, we characterize ConsScale as a partially ordered set and propose a particular dependency hierarchy for cognitive skills. Associated with that hierarchy, a graphical representation of the cognitive profile of an artificial agent is presented as a helpful analytic tool. The proposed evaluation schema is discussed and applied to a number of significant machine consciousness models and implementations. Finally, the possibility of generating qualia and phenomenological states in machines is discussed in the context of the proposed analysis.
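As an illustration of the partially-ordered-set framing (the skill names and dependencies below are invented for the example, not ConsScale's actual levels), a dependency hierarchy can be checked mechanically to produce a cognitive profile:

    # Hypothetical dependency hierarchy: each skill lists the skills it requires.
    DEPENDENCIES = {
        "perception":       [],
        "attention":        ["perception"],
        "self-recognition": ["perception"],
        "planning":         ["attention"],
        "theory-of-mind":   ["self-recognition", "attention"],
    }

    def satisfied(agent_skills, skill, deps=DEPENDENCIES):
        """A skill counts only if everything below it in the partial order holds."""
        return skill in agent_skills and all(
            satisfied(agent_skills, d, deps) for d in deps[skill]
        )

    agent = {"perception", "attention", "theory-of-mind"}   # no self-recognition
    profile = {s: satisfied(agent, s) for s in DEPENDENCIES}
    print(profile)   # theory-of-mind is False: its dependencies are not integrated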
-
Sloman criticizes all existing attempts to define machine consciousness as overly one-sided. He argues that such a definition is not only unattainable but also unnecessary. The critique is well taken in part; yet, whatever his intended aims, by not acknowledging the non-reductive aspects of consciousness, Sloman in fact sides with the reductivist view.
-
After discussing a possible contradiction in Sloman's very challenging intervention, I stress the need not to identify "consciousness" with phenomenal consciousness and with the "qualia" problem. I claim that it is necessary to distinguish different forms and functions of "consciousness" and to model them explicitly, also by exploiting the specific advantage of AI: the ability to make experiments that are impossible in nature, by separating what cannot be separated in human behavior/mind. As for phenomenal consciousness, one should first be able to model what it means to have a "body" and to "feel" it.
-
In the course of seeking an answer to the question “How do you know you are not a zombie?” Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or “wise-man puzzle,” or “muddy-children puzzle”), one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge; that is, I try to show that this challenge can in fact be met by AI in the foreseeable future.
-
The accurate measurement of the level of consciousness of a creature remains a major scientific challenge; nevertheless, a number of new accounts that attempt to address this problem have recently been proposed. In this paper we analyze the principles of these new measures of consciousness, along with other classical approaches, focusing on their applicability to Machine Consciousness (MC). Furthermore, we propose a set of requirements that we think a suitable measure for MC should satisfy, discussing the associated theoretical and practical issues. Using the proposed requirements as a framework for the design of an integrative measure of consciousness, we explore the possibility of designing such a measure in the context of the current state of the art in consciousness studies.
-
The academic journey to a widely acknowledged Machine Consciousness is anticipated to be an emotional one, both in terms of the active debate the subject provokes and a hypothesized need to encapsulate an analogue of emotions in an artificial system in order to progress towards machine consciousness. This paper considers the inspiration that concepts related to emotion may contribute to cognitive systems approaching conscious-like behavior. Specifically, emotions can set goals (including balancing exploration against exploitation), facilitate action in unknown domains, and modify existing behaviors; these roles are explored in cognitive robotics experiments.
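One hedged reading of the exploration-versus-exploitation point (a sketch, not the paper's implementation; the 'frustration' variable is hypothetical) is an emotion signal modulating the exploration rate of a simple bandit learner:

    import random

    def emotional_bandit(arm_rewards, steps=500, seed=0):
        """Epsilon-greedy bandit whose exploration rate tracks 'frustration'."""
        rng = random.Random(seed)
        estimates, counts = [0.0] * len(arm_rewards), [0] * len(arm_rewards)
        frustration = 0.5                        # emotion variable in [0, 1]
        for _ in range(steps):
            epsilon = 0.05 + 0.5 * frustration   # more frustration -> more exploring
            if rng.random() < epsilon:
                arm = rng.randrange(len(arm_rewards))
            else:
                arm = estimates.index(max(estimates))
            reward = rng.gauss(arm_rewards[arm], 0.1)
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]
            # Poor outcomes raise frustration; good outcomes calm the agent.
            delta = 0.1 if reward < 0.5 else -0.05
            frustration = min(1.0, max(0.0, frustration + delta))
        return estimates

    print([round(e, 2) for e in emotional_bandit([0.2, 0.8, 0.5])])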
-
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims by its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph Representation & Reality (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience, and hence that disembodied minds lurk everywhere.
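The formal move can be made concrete with a toy construction (a hedged sketch of this style of argument, not the author's own formulation): any open system passing through enough distinguishable states admits a mapping under which its state trace counts as machine Q running on input x.

    def putnam_mapping(physical_trace, automaton_run):
        """Map a system's distinct physical states onto an automaton's run.

        Any trace with enough distinct states 'implements' the run: simply
        read the i-th physical state as the i-th computational state.
        """
        assert len(physical_trace) >= len(automaton_run)
        return {phys: comp for phys, comp in zip(physical_trace, automaton_run)}

    # Q's run on input x, and a rock's (arbitrary) microstates over the same interval.
    q_run_on_x = ["q0", "q1", "q2", "q_accept"]
    rock_states = ["s_17", "s_04", "s_92", "s_33"]

    interpretation = putnam_mapping(rock_states, q_run_on_x)
    print(interpretation)   # under this mapping, the rock 'computes' Q(x)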
-
The main sources of inspiration for the design of more engaging synthetic characters are existing psychological models of human cognition. Usually these models, and the associated Artificial Intelligence (AI) techniques, are based on partial aspects of the real complex systems involved in the generation of human-like behavior. Emotions, planning, learning, user modeling, set shifting, and attention mechanisms are some remarkable examples of features typically considered in isolation within classical AI control models. Artificial cognitive architectures aim at integrating many of these aspects into effective control systems. However, the design of this sort of architecture is not straightforward. In this paper, we argue that current research efforts in the young field of Machine Consciousness (MC) could help tackle this complexity and provide a useful framework for the design of more appealing synthetic characters. This hypothesis is illustrated with the application of a novel consciousness-based cognitive architecture to the development of a First Person Shooter video game character.
-
This paper critically tracks the development of the machine consciousness paradigm from the incredulity of the 1990s, through its structuring at the turn of this century, to its consolidation at the present time, which forms the basis for conjecture about what might happen in the future. The underlying question is how this development may have changed our understanding of consciousness and whether an artificial version of the concept contributes to the improvement of computational machinery and robots. The paper includes some suggestions for research directions that might be profitable and others that may not be.
-
From the point of view of Cognitive Informatics, consciousness can be considered a grand integration of a number of cognitive processes. Intuitive definitions of consciousness generally involve perception, emotions, attention, self-recognition, theory of mind, volition, etc. Because of this compositional definition of the term, it is usually difficult to define both what exactly a conscious being is and how consciousness could be implemented in artificial machines. When we look at the most evolved biological examples of conscious beings, such as great apes and humans, the vast complexity of observed cognitive interactions, in conjunction with the lack of comprehensive understanding of low-level neural mechanisms, makes the reverse-engineering task virtually intractable. With the aim of effectively addressing the problem of modeling consciousness at a cognitive level, in this work we propose a concrete developmental path in which key stages in the progressive process of building conscious machines are identified and characterized. Furthermore, a method for calculating a quantitative measure of artificial consciousness is presented. The application of the proposed framework is illustrated with a comparative study of different software agents designed to compete in a first-person shooter video game.
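As a purely illustrative stand-in for such a quantitative measure (the stages and weights below are invented, not the paper's formula), one can aggregate a weighted score over the developmental stages an agent has achieved:

    # Hypothetical developmental stages with increasing weights.
    STAGE_WEIGHTS = {"reactive": 1, "attentional": 2, "emotional": 4, "self-aware": 8}

    def consciousness_score(achieved_stages):
        """Toy aggregate: higher stages dominate, echoing a developmental path."""
        return sum(STAGE_WEIGHTS[s] for s in achieved_stages)

    bots = {
        "scripted_bot": {"reactive"},
        "learning_bot": {"reactive", "attentional", "emotional"},
    }
    for name, stages in bots.items():
        print(name, consciousness_score(stages))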
-
The claim adopted here, upon examination of its different versions, is that machines can be conscious. We distinguish three kinds of consciousness: functional, phenomenal, and hard consciousness (f-, p- and h-consciousness). Robots are functionally conscious already. There is also a clear project in AI on how to make computers phenomenally conscious, though criteria differ. Discussion of p-consciousness is clouded by the lack of clarity about its two versions: (1) first-person functional elements (here, p-consciousness), and (2) non-functional elements (h-consciousness). I argue that: (1) there are sufficient reasons to adopt h-consciousness and move forward with discussion of its practical applications; this does not have anti-naturalistic implications. (2) A naturalistic account of h-consciousness should be expected in principle; in neuroscience we are some way towards formulating such an account. (3) A detailed analysis of the notion of consciousness is needed to clearly distinguish p- and h-consciousness; this refers to the notion of a subject that is not an object and to the complementarity of the subjective and objective perspectives. (4) If we can understand the exact mechanism that produces h-consciousness, we should be able to engineer it. (5) H-consciousness is probably not a computational process (it is more like a liver function). Machines can, in principle, be functionally, phenomenally, and h-conscious; all those processes are naturalistic. This is the engineering thesis on machine consciousness, formulated within non-reductive naturalism.
-
This paper reviews the field of artificial intelligence, focusing on embodied artificial intelligence. It also considers models of artificial consciousness, agent-based artificial intelligence, and the philosophical commentary on artificial intelligence. It concludes that there is little consensus or formalism in the field and that its achievements are meager.
-
This paper briefly describes the most relevant current approaches to the implementation of scientific models of consciousness. The main aspects of scientific theories of consciousness are characterized with a view to their possible mapping onto artificial implementations. These implementations are analyzed both theoretically and functionally. A novel pragmatic, functional approach to machine consciousness is also proposed and discussed, and a set of axioms for the presence of consciousness in agents is applied to evaluate and compare the various models.
-
This work describes the application of a novel machine consciousness model to the problem of unknown-environment exploration. This relatively simple problem is analyzed from the point of view of the possible benefits that cognitive capabilities such as attention, environment awareness, and emotional learning can offer. The model we have developed integrates these concepts into a situated-agent control framework, whose first version is being tested in an advanced robotics simulator. The implementation of the relationships and synergies between the different cognitive functionalities of consciousness in the domain of autonomous robotics is also discussed.
-
For today's robots, the notion of behavior is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a deeply cultural concept, regarded by human beings as their defining property. We propose to develop a computable transposition of consciousness concepts into artificial brains able to express emotions and facts of consciousness. The production of such artificial brains allows intentional and genuinely adaptive behavior in autonomous robots. Such a behavior-management system will consist of two parts: the first computes and generates, in a constructivist manner, a symbolic and conceptual representation of the robot moving in its environment; the second realizes the representation of the first using morphologies in a dynamic, geometrical way. The robot's body is itself treated as the morphological apprehension of its material substrate. The model is based strictly on the notion of massive multi-agent organizations under morphological control.