Full bibliography (558 resources)
-
When people speak about consciousness, they distinguish various types and different levels, and they argue for different concepts of cognition. This complicates the discussion about artificial or machine consciousness. Here we take a bottom-up approach to this question by presenting a family of robot experiments that invite us to think about consciousness in the context of artificial agents. The experiments are based on a computational model of sensorimotor contingencies. It has been suggested that these regularities in the sensorimotor flow of an agent can explain raw feels and perceptual consciousness in biological agents. We discuss the validity of the model with respect to sensorimotor contingency theory and consider whether a robot that is controlled by knowledge of its sensorimotor contingencies could have any form of consciousness. We propose that consciousness does not require higher-order thought or higher-order representations. Rather, we argue that consciousness starts when an agent (i) actively (in an endogenously triggered manner) uses its knowledge of sensorimotor contingencies to issue predictions and (ii) deploys this capability to structure subsequent action.
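To make criteria (i) and (ii) concrete, the following minimal Python sketch shows an agent that stores learned sensorimotor contingencies, actively issues predictions from them, and uses those predictions to select its next action. All class and method names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a discrete world: the agent stores sensorimotor
# contingencies as (sensation, action) -> next-sensation regularities,
# (i) actively issues predictions from them, and (ii) uses the predictions
# to structure its next action. All names are hypothetical.
import random

class SensorimotorAgent:
    def __init__(self, actions):
        self.actions = actions
        self.contingencies = {}  # (sensation, action) -> predicted next sensation

    def learn(self, sensation, action, next_sensation):
        """Record a regularity observed in the sensorimotor flow."""
        self.contingencies[(sensation, action)] = next_sensation

    def predict(self, sensation, action):
        """(i) Endogenously issue a prediction from stored contingencies."""
        return self.contingencies.get((sensation, action))

    def act(self, sensation, goal):
        """(ii) Let predictions structure action: pick the action whose
        predicted outcome matches the goal, else explore."""
        for action in self.actions:
            if self.predict(sensation, action) == goal:
                return action
        return random.choice(self.actions)

# Usage: after exploration, predictions guide the choice of action.
agent = SensorimotorAgent(actions=["turn_left", "turn_right"])
agent.learn("wall_ahead", "turn_left", "open_space")
assert agent.act("wall_ahead", goal="open_space") == "turn_left"
```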
-
Traditional approaches model consciousness as the outcome either of internal computational processes or of cognitive structures. We advance an alternative hypothesis – consciousness is the hallmark of a fundamental way to organise causal interactions between an agent and its environment. Thus, consciousness is not a special property or an addition to the cognitive processes, but rather the way in which the causal structure of the body of the agent is causally entangled with a world of physical causes. The advantage of this hypothesis is that it suggests how to exploit causal coupling to envisage tentative guidelines for designing conscious artificial agents. In this paper, we outline the key characteristics of these causal building blocks and then a set of standard technologies that may take advantage of such an approach. Consciousness is modelled as a kind of cognitive middle ground, and experience is not an internal by-product of cognitive processes but the external world that is carved out by means of causal interaction. Thus, consciousness is not the penthouse on top of a 50-storey cognitive skyscraper, but the way in which the steel girders snap together from bottom to top.
-
In this work, we present a distributed cognitive architecture used to control the traffic in an urban network. This architecture relies on a machine consciousness approach – Global Workspace Theory – using competition and broadcast to allow a group of local traffic controllers to interact, resulting in better group performance. The main idea is that the local controllers usually perform a purely reactive behavior, defining the times of red and green lights according only to local information. These local controllers compete to define which of them is experiencing the most critical traffic situation. The controller in the worst condition gains access to the global workspace and broadcasts its condition (and its location) to all other controllers, asking for their help in dealing with its situation. This call from the controller accessing the global workspace interferes with the reactive local behavior of those controllers that have some chance of helping the controller in the critical condition, prompting them to contain traffic flowing in its direction. This group behavior, coordinated by the global workspace strategy, turns the once reactive behavior into a kind of deliberative one. We show that this strategy is capable of improving the overall mean travel time of vehicles flowing through the urban network. The “Artificial Consciousness” traffic signal controller showed a consistent performance gain over the whole simulation time and across different simulated scenarios, ranging from around 13.8% to more than 21%.
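As an illustration of the competition-and-broadcast mechanism the abstract describes, here is a minimal Python sketch under assumed details: the criticality measure (queue length), the helping rule, and all names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the competition-and-broadcast scheme: reactive local
# controllers bid with a criticality measure; the one in the worst condition
# wins access to the global workspace and broadcasts; controllers feeding the
# critical junction hold traffic back.
from dataclasses import dataclass, field

@dataclass
class LocalController:
    junction_id: str
    queue_length: int = 0        # local (reactive) information only
    green_extension: int = 0     # deliberative override, in seconds
    downstream: list = field(default_factory=list)  # junctions this one feeds

    def criticality(self):
        """Bid for workspace access; here simply the local queue length."""
        return self.queue_length

    def on_broadcast(self, winner_id):
        """Controllers with a chance of helping (those sending traffic towards
        the critical junction) contain traffic by cutting their green time."""
        if winner_id in self.downstream:
            self.green_extension = -10   # illustrative value

def global_workspace_step(controllers):
    # Competition: the controller in the worst condition gains access.
    winner = max(controllers, key=lambda c: c.criticality())
    # Broadcast: all other controllers hear the call and may help.
    for c in controllers:
        if c is not winner:
            c.on_broadcast(winner.junction_id)
    return winner
```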
-
Consciousness is not only a philosophical but also a technological issue, since a conscious agent has evolutionary advantages. Thus, to replicate a biological level of intelligence in a machine, concepts of machine consciousness have to be considered. The widespread internalistic assumption that humans do not experience the world as it is, but through an internal ‘3D virtual reality model’, hinders this construction. To overcome this obstacle for machine consciousness, a new theoretical approach to consciousness is sketched between internalism and externalism to address the gap between experience and the physical world. The ‘internal interpreter concept’ is replaced by a ‘key-lock approach’. Here, consciousness is not an image of the external world but the world itself. A possible technological design for a conscious machine is drafted, taking advantage of an architecture exploiting self-development of new goals, intrinsic motivation, and situated cognition. The proposed cognitive architecture does not pretend to be conclusive or experimentally satisfying, but rather forms the theoretical first step towards a full architecture model on which the authors are currently working, which will enable conscious agents, e.g. for robotics or software applications.
-
The problem of consciousness is one of the most important problems in science as well as in philosophy. There are different philosophers and different scientists who define and explain it differently. As far as our knowledge of consciousness is concerned, ‘consciousness’ does not admit of a definition in terms of genus and differentia or necessary and sufficient conditions. In this paper I shall explore the very idea of machine consciousness. Machine consciousness has offered a causal explanation of the ‘how’ and ‘what’ of consciousness, but it has failed to explain the ‘why’ of consciousness. Its explanation rests on the ground that consciousness is causally dependent on the material universe and that all conscious phenomena can be explained by mapping the physical universe. Again, this mechanical/epistemological theory of consciousness is essentially committed to a scientific world view, which cannot avoid the metaphysical implications of consciousness.
-
Kevin O’Regan argues that seeing is a way of exploring the world, and that this approach helps us understand consciousness. O’Regan is interested in applying his ideas to the modeling of consciousness in robots. Hubert Dreyfus has raised a range of objections to traditional approaches to artificial intelligence, based on his reading of Heidegger. In light of this, I explore here ways in which O’Regan’s approach meets these Heideggerian considerations, and ways in which his account is more Heideggerian than that of Dreyfus. Despite these successes, O’Regan leaves out any role for emotion. This is an area where a Heideggerian perspective may offer useful insights into what must be added to the sense of self O’Regan includes in his account in order for a robot to feel.
-
In this paper, it will be argued that common sense knowledge does not have a unitary structure. It is rather articulated at two different levels: a deep and a superficial level of common sense. The deep level is based on know-how procedures, on metaphorical frames built on imaginative bodily representations, and on a set of adaptive behaviors. The superficial level includes beliefs and judgments, which can be true or false and are culture dependent. Deep common sense does not admit fast change, because it depends more on human biology than on cultural conventions. The deep level of common sense is characterized by a sensorimotor representational format, while the superficial level is largely made up of propositional entities. This difference can be considered a constraint for machine consciousness design, insofar as the latter should be based on a reliable model of common sense knowledge.
-
When classification or time-series prediction problems are tackled with Artificial Neural Networks (ANNs), commonly applied structures such as feed-forward or recurrent Multi-Layer Perceptrons (MLPs) characteristically deliver poor performance and accuracy. This is especially the case with complex datasets containing numerous input (predictor) and/or target attributes, independently of the applied learning methods, activation functions, biases, etc. The cortical ANN, inspired by theoretical aspects of human consciousness and its signal processing, is an ANN structure developed during the research phase of the “System applying High Order Computational Intelligence” (SHOCID) project. Its structure creates redundancy and error tolerance, which helps to avoid the aforementioned problems. In this work, the cortical ANN is introduced, together with an algorithm for evolving this special ANN type's structure until the most suitable solution has been found.
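The abstract does not specify the evolutionary operators SHOCID uses, so the following Python sketch merely illustrates the general idea of evolving a network's structure until the most suitable solution is found, using an assumed hill-climbing mutation over hidden-layer sizes; every name and parameter is hypothetical.

```python
# Assumed sketch of structure evolution: mutate the hidden-layer layout of a
# candidate network and keep the variant with the lowest validation error.
import random

def mutate(layout):
    """Randomly grow, shrink, or resize one hidden layer."""
    layout = list(layout)
    op = random.choice(["add", "remove", "resize"])
    if op == "add":
        layout.insert(random.randrange(len(layout) + 1), random.randint(2, 32))
    elif op == "remove" and len(layout) > 1:
        layout.pop(random.randrange(len(layout)))
    else:
        layout[random.randrange(len(layout))] = random.randint(2, 32)
    return layout

def evolve_structure(evaluate, initial=(8,), generations=50):
    """evaluate(layout) -> validation error; lower is better."""
    best, best_err = list(initial), evaluate(list(initial))
    for _ in range(generations):
        candidate = mutate(best)
        err = evaluate(candidate)
        if err < best_err:           # keep the most suitable structure so far
            best, best_err = candidate, err
    return best, best_err
```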
-
Following arguments put forward in my book (Why red doesn’t sound like a bell: understanding the feel of consciousness. Oxford University Press, New York, USA, 2011), this article takes a pragmatic, scientist’s point of view about the concepts of consciousness and “feel”, pinning down what people generally mean when they talk about these concepts, and then investigating to what extent these capacities could be implemented in non-biological machines. Although the question of “feel”, or “phenomenal consciousness” as it is called by some philosophers, is generally considered to be the “hard” problem of consciousness, the article shows that by taking a “sensorimotor” approach, the difficulties can be overcome. What remains to account for are the notions of so-called “access consciousness” and the self. I claim that though they are undoubtedly very difficult, these are not logically impossible to implement in robots.
-
The potential for the near-future development of two technologies — artificial forms of intelligence, as well as the ability to "upload" human minds into artificial forms — raises several ethical questions regarding the proper treatment and understanding of these artificial minds. The crux of the dilemma is whether or not such creations should be accorded the same rights we currently grant humans, and this question seems to hinge upon whether they will exhibit their own "subjectivity", or internal viewpoints. Recognizing this as the essential factor yields some ethical guidance, but these issues need further exploration before such technologies become available.
-
The main motivation for this work is to investigate the advantages provided by machine consciousness in the control of software agents. In order to pursue this goal, we developed a cognitive architecture, with different levels of machine consciousness, targeting the control of artificial creatures. As a standard guideline, we applied cognitive neuroscience concepts to incrementally develop the cognitive architecture, following the evolutionary steps taken by the animal brain. The triune brain theory proposed by MacLean, together with Arrabales's "ConsScale", serve as roadmaps to achieve each developmental stage, while iCub — a humanoid robot and its simulator — serves as a platform for the experiments. A completely codelet-based system "Core" has been implemented, serving the whole architecture.
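As a rough illustration of what a codelet-based "Core" might look like, here is a minimal Python sketch: codelets are small independent processes with an activation level, and the core repeatedly runs the most activated ones against a shared working memory. All names and the scheduling policy are assumptions, not the authors' code.

```python
# Illustrative sketch of a codelet-based core; not the authors' implementation.
import heapq

class Codelet:
    def __init__(self, name, activation, behaviour):
        self.name = name
        self.activation = activation   # how urgent this codelet currently is
        self.behaviour = behaviour     # callable acting on working memory

    def run(self, memory):
        self.behaviour(memory)

class Core:
    def __init__(self, codelets):
        self.codelets = codelets
        self.memory = {}               # shared working memory

    def step(self, k=3):
        """Run the k most activated codelets in this cognitive cycle."""
        for codelet in heapq.nlargest(k, self.codelets,
                                      key=lambda c: c.activation):
            codelet.run(self.memory)
```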
-
Shanahan's work admirably and convincingly supports Baars' global workspace theory by means of plausible and updated neural models. Yet little of his work is related to the issue of consciousness as phenomenal experience. He focuses his effort mostly on the behavioral correlates of consciousness, such as autonomy, flexibility, and information integration. Moreover, although the importance of embodiment and situated cognition is emphasized, most of the conceptual tools suggested (dynamic systems, complex networks, global workspace) require the external world only during their development. Leaving aside the issue of phenomenal experience, the book fleshes out a convincing and thought-provoking model for many aspects of conscious behaviour.
-
A brain model based on glial-neuronal interactions is proposed. Glial-neuronal synaptic units are interpreted as elementary reflection mechanisms, called proemial synapses. In glial networks (syncytia), cyclic intentional programs are generated, interpreted as auto-reflective intentional programming. Both types of reflection mechanisms are formally described and may be implementable in a robot brain. Based on the logic of acceptance and rejection, the robot is capable of rejecting irrelevant environmental information, showing at least a "touch" of subjective behavior. Since reflective intentional programming generates both relevant and irrelevant structures already within the brain, ontological gaps arise which must be integrated. In the human brain, the act of self-reference may exert a holistic function enabling self-consciousness. However, since the act of self-reference is a mysterious function not experimentally testable in brain research, it cannot be implemented in a robot brain. Therefore, the creation of self-conscious robots may never be possible. Finally, some philosophical implications are discussed.
-
Cognitive theories of consciousness should provide effective frameworks to implement machine consciousness. The Global Workspace Theory is a leading theory of consciousness which postulates that the primary function of consciousness is a global broadcast that facilitates recruitment of internal resources to deal with the current situation as well as modulate several types of learning. In this paper, we look at architectures for machine consciousness that have the Global Workspace Theory as their basis and discuss the requirements in such architectures to bring about both functional and phenomenal aspects of consciousness in machines.
-
Artificial consciousness is still far from being an established discipline. We will try to outline some theoretical assumptions that could help in dealing with phenomenal consciousness. What are the technological and theoretical obstacles that face enthusiastic scholars of artificial consciousness? After presenting an outline of the state of artificial consciousness, we will focus on the relevance of phenomenal consciousness. Artificial consciousness needs to tackle the issue of phenomenal consciousness in a physical world. Up to now, the only models that give some hope of succeeding are the various kinds of externalism.
-
In this mind-expanding book, scientific pioneer Marvin Minsky continues his groundbreaking research, offering a fascinating new model for how our minds work. He argues persuasively that emotions, intuitions, and feelings are not distinct things, but different ways of thinking. By examining these different forms of mind activity, Minsky says, we can explain why our thought sometimes takes the form of carefully reasoned analysis and at other times turns to emotion. He shows how our minds progress from simple, instinctive kinds of thought to more complex forms, such as consciousness or self-awareness. And he argues that because we tend to see our thinking as fragmented, we fail to appreciate what powerful thinkers we really are. Indeed, says Minsky, if thinking can be understood as the step-by-step process that it is, then we can build machines -- artificial intelligences -- that not only can assist with our thinking by thinking as we do but have the potential to be as conscious as we are. Eloquently written, The Emotion Machine is an intriguing look into a future where more powerful artificial intelligences await.
-
Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. The purpose of consciousness, from an evolutionary perspective, is often held to have something to do with the allocation and organization of scarce cognitive resources. This chapter describes Daniel Dennett's idea of the intentional stance, in which an observer explains a system's behavior by invoking such intentional categories as beliefs and goals. The computationalist theory of phenomenal consciousness ends up looking like a spoil-sport's explanation of a magic trick. The chapter focuses on critiques that are specifically directed at computational models of consciousness, as opposed to general critiques of materialist explanation. The contribution of artificial intelligence to consciousness studies has been slender so far, because almost everyone in the field would rather work on better defined, less controversial problems.
-
Thinking and being conscious are two fundamental aspects of the subject. Although both are challenging, conscious experience has often been considered the more elusive (Chalmers 1996). However, in recent years, several researchers have addressed the hypothesis of designing and implementing models for artificial consciousness—on one hand there is hope of being able to design a model for consciousness, on the other hand the actual implementations of such models could be helpful for understanding consciousness. The traditional field of Artificial Intelligence is now flanked by the seminal field of artificial or machine consciousness. In this chapter I will analyse the current state of the art of models of consciousness and then outline an externalist theory of the conscious mind that is compatible with the design and implementation of an artificial conscious being. As I argue in the following, this task can be profitably approached once we abandon the dualist framework of traditional Cartesian substance metaphysics and adopt a process-metaphysical stance. Thus, I sketch an alternative externalist process-based ontological framework. From within this framework, I venture to suggest a series of constraints for a consciousness-oriented architecture.
-
The study of several theories and models of consciousness, among them the functional and cognitive model exhibited in Baars’ ‘Global Workspace’ theory, led us to identify computational correlates of consciousness and discuss their possible representations within a model of an intelligent agent. We first review a particular agent implementation given by an abstract machine, and then identify the extensions required to accommodate the main attributes of consciousness. This amounts to forming unconscious processor coalitions that result in the creation of contexts. These extensions can be formulated within a reified virtual machine encompassing a representation of the original machine as well as an additional introspective component.
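A speculative Python sketch of the reification idea: the extended machine carries a representation of the original machine's state, and an introspective component reads that self-model to group strongly activated unconscious processors into a coalition, creating a context. Every name and threshold here is an assumption for illustration only.

```python
# Speculative sketch: a reified virtual machine that embeds a representation
# of the original machine plus an introspective component which groups highly
# activated unconscious processors into a coalition (a "context").
class AbstractMachine:
    def __init__(self):
        self.processors = {}     # processor name -> current activation

    def step(self):
        """Ordinary processing of the original (non-reified) machine."""
        # domain-specific unconscious processing would go here

class ReifiedMachine(AbstractMachine):
    def __init__(self):
        super().__init__()
        self.self_model = {}     # reified representation of the machine itself

    def introspect(self, threshold=0.5):
        """Introspective component: copy the machine's own state into the
        self-model and form a coalition of strongly activated processors."""
        self.self_model = dict(self.processors)
        coalition = {name for name, act in self.self_model.items()
                     if act > threshold}
        return coalition         # the newly created context
```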