
Full bibliography (558 resources)

  • Companion or ‘pet’ robots can be expected to be an important part of a future in which robots contribute to our lives in many ways. An understanding of emotional interactions is essential to such robots’ behavior. To improve the cognitive and behavioral systems of such robots, we propose an artificial topological consciousness that uses a synthetic neurotransmitter and motivation, including a biologically inspired emotion system. A fundamental aspect of a companion robot is a cross-communication system that enables natural interactions between humans and the robot. This paper focuses on three points in the development of our proposed framework: (1) the organization of behavior, including internal emotional state, within the phylogenetically grounded consciousness-based architecture; (2) a method whereby the robot can have empathy toward its human user’s expressions of emotion; and (3) a method that enables the robot to select a facial expression in response to the human user, conveying an instant human-like ‘emotion’ based on emotional intelligence (EI) and a biologically inspired topological online method, to express, for example, encouragement or delight. We also demonstrate the performance of the artificial consciousness with respect to its complexity level, and the robot’s social expressions, which are designed to enhance the user’s affinity with the robot.
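
    The abstract gives no equations for the neurotransmitter/motivation coupling, so the following is only a minimal Python sketch of how a synthetic neurotransmitter signal might drive a motivation level that gates behavior; the class name, constants, and update rule are illustrative assumptions, not the authors' implementation.

      # Hypothetical motivation unit: a synthetic neurotransmitter
      # signal excites a decaying motivation level, and a behavior
      # fires once the level crosses a threshold. All constants are
      # assumptions for illustration only.
      class MotivationUnit:
          def __init__(self, decay=0.9, gain=0.5, threshold=0.6):
              self.level = 0.0
              self.decay = decay          # passive decay per step
              self.gain = gain            # weight of the neurotransmitter cue
              self.threshold = threshold  # firing level for a behavior

          def step(self, neurotransmitter):
              # neurotransmitter is a reward-like cue in [0, 1]
              self.level = min(1.0, self.decay * self.level
                               + self.gain * neurotransmitter)
              return self.level >= self.threshold

      unit = MotivationUnit()
      for cue in [0.1, 0.8, 0.9, 0.0, 0.0]:
          fired = unit.step(cue)
          print(f"motivation={unit.level:.2f} behavior_fires={fired}")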

  • The rapid development of non-industrial robots designed with artificial intelligence (AI) methods aims to have them imitate human thinking and behavior. Our work has therefore focused on applying brain-inspired technology to develop a conscious behavior robot (Conbe-I). We previously created a hierarchical structure model called the Consciousness-Based Architecture (CBA) module, but it is limited in managing and selecting behavior, which depends only on increases and decreases in motivation levels. In this paper, we therefore introduce a dynamic behavior selection model based on emotional states, developed using self-organizing map (SOM) learning and a Markov model, in order to define the relationship between behavioral selection and the emotional expression model. Experimental results confirm the effectiveness of the proposed system.
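
    The abstract names SOM learning plus a Markov model but not how they are coupled; the Python sketch below shows one plausible reading, in which the emotional state selects a SOM node and each node indexes a behavior-transition matrix. Dimensions, the codebook, and the transition probabilities are all assumed for illustration.

      # Sketch of SOM-plus-Markov behavior selection. Every dimension,
      # constant, and update rule here is an assumption.
      import numpy as np

      rng = np.random.default_rng(0)

      n_nodes, emo_dim, n_behaviors = 9, 3, 4   # 3x3 SOM, 3-D emotion, 4 behaviors
      som = rng.random((n_nodes, emo_dim))      # SOM codebook vectors
      # One behavior-transition matrix per SOM node (rows sum to 1).
      markov = rng.random((n_nodes, n_behaviors, n_behaviors))
      markov /= markov.sum(axis=2, keepdims=True)

      def best_matching_unit(emotion):
          # Standard SOM lookup: nearest codebook vector to the emotion state.
          return int(np.argmin(np.linalg.norm(som - emotion, axis=1)))

      def select_behavior(emotion, current_behavior):
          # The emotional state picks a Markov chain; the chain picks
          # the next behavior stochastically from the current one.
          node = best_matching_unit(emotion)
          return int(rng.choice(n_behaviors, p=markov[node, current_behavior]))

      behavior = 0
      for _ in range(5):
          emotion = rng.random(emo_dim)         # e.g. (pleasure, arousal, dominance)
          behavior = select_behavior(emotion, behavior)
          print("behavior:", behavior)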

  • This paper develops research on a pet robot and its artificial consciousness. We propose a model of animal behavior and emotion that uses an artificial neurotransmitter and motivation. The research also implements communication between a human and the pet robot with respect to social cognition and interaction, since the development of cross-creature communication is crucial for friendly companionship. The system focuses on three points. The first is the organization of the behavior and emotion model with respect to phylogenesis. The second is a method by which the robot can have empathy with the user’s expressions. The third is how the robot can socially perform its expressions toward the human, for encouragement or delight, based on its own emotion and the human’s expression. The paper presents experiments on the robot’s performance using cross-perception and cross-expression between the animal robot and human social interaction, based on the consciousness-based architecture (CBA).

  • This note reports on interdisciplinary approaches to modeling consciousness, self-awareness in particular, aiming at an artificial consciousness that can be mounted on an autonomous mobile robot. For self-awareness to emerge, the self-identification process plays an important role. Self-awareness would emerge when self-locating in a self-created map in robot navigation; when solving self-related problems in (a self-related version of) the frame problem; and when a singularity arises in mapping the reference point in mathematical mappings.

  • Although many models of consciousness have been proposed from various viewpoints, they have not been based on learning activities in a whole system with the capability of autonomous adaptation. We have been investigating a simplified system using artificial neural nodes to clarify the functions and configuration needed for learning in a system that autonomously adapts to its environment. We demonstrated that phenomenal consciousness can be explained using a method of "virtualization" in the information system and that learning activities in whole-system adaptation are related to consciousness. However, we had not sufficiently clarified the learning activities of such a system. By investigating learning activities at the whole-system level, consciousness is modeled as a system-level learning activity that modifies both the system's own configuration and its states during autonomous adaptation. The model not only explains the time delay in Libet's experiment, but is also positioned as an improved model of Global Workspace Theory (GWT).

  • The paper proposes a design approach as a blueprint for building a sentient artificial agent capable of exhibiting humanlike attributions of consciousness. The paper also considers whether, if such an artificial agent is ever built, it would be indistinguishable from a human being. It is glaringly evident that the evolution of artificial intelligence is guided by us, humans, whose own mental evolution has been shaped over the years by the phenomenology of (Darwinian) adaptation and survival. Yet the evolution of synthetic minds powered by artificial cognition seems to be quite fast. If we accept the analogy of 'mind' in its fullest sense, the day when the mental embodiment of consciousness in machines becomes reality is not very far off. But before such a feat becomes reality, rhetorical debates have been taking shape about how to decode and decipher consciousness in machines, a phenomenon often dismissed as a 'nonentity': what would be the true essence of such an artificial consciousness? This paper discusses these aspects and attempts to throw some new light on the design and developmental aspects of artificial consciousness.

  • In recent years, a classic problem regarding the design of artificial minds embedded with synthetic consciousness has resurfaced in two forms: 1) building machines or robots that closely mimic human behavior, and 2) the embodiment of consciousness in such artificial entities. These two problems come down to a further design consideration, and its standardization: whether such entities should look like human beings in artificial flesh and skin, or instead be designed entirely as original architectures with shape-implicit forms of embodied cognition that could stand as true peers of the human race. The first problem concerns the art and science of imitating human behavior, whereas the subsequent problems deal specifically with the predicament of abstracting and embodying mental attributions, primarily consciousness, in machines. The final dilemma is the consideration of standard design models that would reflect the nature of such embodied consciousness. In this endeavor, I discuss both the design approach for imitating human abilities in machines and the modeling of human consciousness in robots within a relational framework for orienting mental attributions, in a sense that would support the evolution of robot consciousness.

  • Synthetic phenomenology typically focuses on the analysis of simplified perceptual signals with small or reduced dimensionality. Instead, synthetic phenomenology should be analyzed in terms of perceptual signals with huge dimensionality: effective phenomenal processes actually exploit the entire richness of the dynamic perceptual signals coming from the retina. The hypothesis of a high-dimensional buffer at the basis of the perception loop that generates the robot's synthetic phenomenology is analyzed in terms of a cognitive architecture for robot vision that the authors have developed over the years. Despite the obvious computational problems when dealing with high-dimensional vectors, spaces of increased dimensionality could be a boon when searching for global minima. A simplified setup based on static scene analysis and a more complex setup based on the CiceRobot robot are discussed.

  • The function of the brain is intricately woven into the fabric of time. Functions such as (i) storing and accessing past memories, (ii) dealing with immediate sensorimotor needs in the present, and (iii) projecting into the future for goal-directed behavior are good examples of how key brain processes are integrated into time. Moreover, it can even seem that the brain generates time (in the psychological sense, not in the physical sense) since, without the brain, a living organism cannot have the notion of past nor future. When combined with an evolutionary perspective, this seemingly straightforward idea that the brain enables the conceptualization of past and future can lead to deeper insights into the principles of brain function, including that of consciousness. In this paper, we systematically investigate, through simulated evolution of artificial neural networks, conditions for the emergence of past and future in simple neural architectures, and discuss the implications of our findings for consciousness and mind uploading.
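
    The paper's simulations are not specified in this abstract; as a toy analogue, the Python sketch below evolves a three-weight recurrent unit on a task that is solvable only by retaining the past (recalling the previous input). The task, network size, and (1+lambda) evolution strategy are assumptions.

      import numpy as np

      def fitness(genome, steps=50, seed=42):
          # Score a genome on a delayed-recall task: the output must
          # reproduce the *previous* input, which is impossible unless
          # the recurrent weight carries information about the past.
          rng = np.random.default_rng(seed)
          w_in, w_rec, w_out = genome
          h, prev, err = 0.0, 0.0, 0.0
          for _ in range(steps):
              x = rng.uniform(-1, 1)
              y = np.tanh(w_out * h)       # depends only on the hidden state
              err += (y - prev) ** 2
              h = np.tanh(w_in * x + w_rec * h)
              prev = x
          return -err / steps              # higher is better

      # Simple (1 + lambda) evolution strategy over the three weights.
      rng = np.random.default_rng(1)
      best = rng.normal(size=3)
      for _ in range(200):
          children = best + rng.normal(scale=0.1, size=(10, 3))
          scores = [fitness(c) for c in children]
          if max(scores) > fitness(best):
              best = children[int(np.argmax(scores))]
      print("evolved weights:", best, "fitness:", fitness(best))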

  • Whole brain emulation aims to re-implement functions of a mind in another computational substrate by carefully emulating the function of fundamental components, and by copying the connectivity between those components. The precision with which this is done must enable prediction of the natural development of active states. To accomplish this, in vivo measurements at large scale and high resolution are critically important. We propose a set of requirements for these empirical measurements. We then outline general methods leading to acquisition of a structural and functional connectome, and to the characterization of responses at large scale and high resolution. Finally, we describe two new project developments that tackle the problem of functional recording in vivo, namely the "molecular ticker-tape" and the integrated-circuit "Cyborcell".

  • Self-aware individuals are more likely to consider whether their actions are appropriate in terms of public self-consciousness, and to use that information to execute behaviors that match external standards and/or expectations. The learning concepts through which individuals monitor themselves have generally been overlooked by artificial intelligence researchers. Here we report on our attempt to integrate a self-awareness mechanism into an agent's learning architecture. Specifically, we describe (a) our proposal for a self-aware agent model that includes an external learning mechanism and internal cognitive capacity with super-ego and ego characteristics; and (b) our application of a version of the iterated prisoner's dilemma representing conflicts between the public good and private interests to analyze the effects of self-awareness on an agent's individual performance and cooperative behavior. Our results indicate that self-aware agents that consider public self-consciousness utilize rational analysis in a manner that promotes cooperative behavior and supports faster societal movement toward stability. We found that a small number of self-aware agents are sufficient for improving social benefits and resolving problems associated with collective irrational behaviors.
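
    The payoff structure here is the standard iterated prisoner's dilemma; how "public self-consciousness" enters the decision is not specified in the abstract, so the weighting in the Python sketch below is an assumption. It does reproduce the qualitative finding: self-aware agents sustain cooperation.

      # C=0, D=1; (my_payoff, other_payoff) for (my_move, other_move).
      PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5),
                (1, 0): (5, 0), (1, 1): (1, 1)}

      def choose(self_awareness, opp_coop_rate):
          # Expected private payoff of each move against a stochastic opponent.
          ev_c = opp_coop_rate * 3
          ev_d = opp_coop_rate * 5 + (1 - opp_coop_rate) * 1
          # A self-aware agent adds a bonus for matching the public
          # standard (cooperation); weight 0 means purely selfish.
          return 0 if ev_c + self_awareness * 3.0 >= ev_d else 1

      def play(aw_a, aw_b, rounds=100):
          coop_a = coop_b = 0.5            # running cooperation estimates
          total_a = total_b = 0
          for _ in range(rounds):
              a, b = choose(aw_a, coop_b), choose(aw_b, coop_a)
              pa, pb = PAYOFF[(a, b)]
              total_a += pa; total_b += pb
              coop_a = 0.9 * coop_a + 0.1 * (a == 0)
              coop_b = 0.9 * coop_b + 0.1 * (b == 0)
          return total_a, total_b

      print("selfish vs selfish:   ", play(0.0, 0.0))
      print("self-aware vs selfish:", play(1.0, 0.0))
      print("self-aware pair:      ", play(1.0, 1.0))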

  • This paper addresses the problem of human–computer interaction when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed, and a model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of information processing. The SAIBA framework is extended, and the model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking, behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented. The first uses an implicit implementation of emotions. The second has an explicit realization of a three-layered model of emotions that is highly interconnected with the other components of the system. Details of the model with the implicit implementation of emotional behavior are shown, along with the evaluation methodology and results. A discussion of the extended model of the agent is given in the final part of the paper.
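
    SAIBA itself standardizes only the intent planning, behavior planning, and behavior realization stages; the extended component list above comes from the abstract, while the stage bodies in the Python sketch below are placeholders showing only how data might flow through such a pipeline.

      from dataclasses import dataclass, field

      @dataclass
      class Context:
          raw_input: str
          percepts: dict = field(default_factory=dict)
          attention: str = ""
          intention: str = ""
          plan: list = field(default_factory=list)

      def detect(ctx):            # human behavior detection
          ctx.percepts["smiling"] = "smile" in ctx.raw_input
          return ctx

      def interpret(ctx):         # human behavior interpretation
          ctx.percepts["affect"] = "positive" if ctx.percepts["smiling"] else "neutral"
          return ctx

      def plan_intention(ctx):    # intention planning (FML level in SAIBA terms)
          ctx.intention = "reciprocate" if ctx.percepts["affect"] == "positive" else "engage"
          return ctx

      def track_attention(ctx):   # attention tracking
          ctx.attention = "user_face"
          return ctx

      def plan_behavior(ctx):     # behavior planning (BML level)
          ctx.plan = ["smile", "nod"] if ctx.intention == "reciprocate" else ["gaze"]
          return ctx

      def realize(ctx):           # behavior realization
          return f"executing: {', '.join(ctx.plan)} (attending to {ctx.attention})"

      ctx = Context(raw_input="user smiles and waves")
      for stage in (detect, interpret, plan_intention, track_attention, plan_behavior):
          ctx = stage(ctx)
      print(realize(ctx))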

  • In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, and articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, the Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each element of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on whether it is possible for ICTs to be conscious.

  • Machine consciousness is not only a technological challenge, but a new way to approach scientific and theoretical issues which have not yet received a satisfactory solution from AI and robotics. We outline the foundations and the objectives of machine consciousness from the standpoint of building a conscious robot.

  • The term "synthetic phenomenology" refers to: 1) any attempt to characterize the phenomenal states possessed, or modeled by, an artefact (such as a robot); or 2) any attempt to use an artefact to help specify phenomenal states (independently of whether such states are possessed by a naturally conscious being or an artefact). The notion of synthetic phenomenology is clarified, and distinguished from some related notions. It is argued that much work in machine consciousness would benefit from being more cognizant of the need for synthetic phenomenology of the first type, and of the possible forms it may take. It is then argued that synthetic phenomenology of the second type looks set to resolve some problems confronted by standard, non-synthetic attempts at characterizing phenomenal states. An example of the second form of synthetic phenomenology is given.

  • The functional capabilities that consciousness seems to provide to biological systems can supply valuable principles for the design of more autonomous and robust technical systems. These functional concepts bear a notable similarity to those underlying the notion of an operating system in software engineering, which allows us to specialize the computer metaphor for the mind into an operating-system metaphor for consciousness. In this article, departing from these ideas and a model-based theoretical framework for cognition, we present an architectural proposal for machine consciousness called the Operative Mind. According to it, machine consciousness could be implemented as a set of services, in an operating-system fashion, based on a model of the system's own control architecture; these services supervise the adequacy of the system's architectural structure to the current objectives, triggering and managing adaptivity mechanisms.
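
    As one reading of the operating-system metaphor, the Python sketch below implements a single supervisory "service" that checks whether the active modules still cover the current objective and triggers a reconfiguration when they do not; the data structures and adequacy rule are assumptions, not the Operative Mind specification.

      class SupervisoryService:
          def __init__(self, architecture, objective):
              # Model of the system's own control architecture plus
              # the objective it is currently serving.
              self.architecture = architecture
              self.objective = objective

          def adequate(self):
              # Adequacy check: every capability the objective needs
              # is among the active modules.
              return self.objective["needs"] <= self.architecture["active_modules"]

          def step(self):
              if not self.adequate():
                  missing = self.objective["needs"] - self.architecture["active_modules"]
                  self.architecture["active_modules"] |= missing  # adaptivity mechanism
                  return f"reconfigured: activated {sorted(missing)}"
              return "adequate"

      arch = {"active_modules": {"perception", "navigation"}}
      goal = {"needs": {"perception", "navigation", "grasping"}}
      svc = SupervisoryService(arch, goal)
      print(svc.step())   # reconfigured: activated ['grasping']
      print(svc.step())   # adequate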

  • Artificial intelligence (AI) and the research on consciousness have reciprocally influenced each other – theories about consciousness have inspired work on AI, and the results from AI have changed our interpretation of the mind. AI can be used to test theories of consciousness and research has been carried out on the development of machines with the behavior, cognitive characteristics and architecture associated with consciousness. Some people have argued against the possibility of machine consciousness, but none of these objections are conclusive and many theories openly embrace the possibility that phenomenal consciousness could be realized in artificial systems.

  • Consciousness is a tremendously complex phenomenon. To understand it in the simplest way possible, we examined the configuration and functions of an autonomously adaptive system that can adapt to an environment without a teacher, and proposed a method for modeling consciousness on that system. In modeling consciousness, it is important to note the difference between phenomenal consciousness and functional consciousness. To clarify the difference, a model with two layers, a physical layer and a logical layer, is proposed, and the functions of primitive consciousness on the autonomously adaptive system are clarified on this model. The physical layer is composed of artificial neural nodes; all signals are processed in detail by these nodes. In contrast, the logical layer is composed of the minimum information, selected from the physical layer, that the system needs to adapt itself. Operations in the logical layer are represented by interactions among only this selected information. Our daily conscious phenomena are expressed on the logical layer.
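
    The two-layer structure can be illustrated with a toy selection rule in Python: all signals are processed in a physical layer of neural nodes, and only the most salient few are promoted to the logical layer. The top-k salience rule is an assumption; the abstract does not specify the selection mechanism.

      import numpy as np

      rng = np.random.default_rng(2)

      def physical_layer(stimulus, weights):
          # Every signal is processed in detail by the neural nodes.
          return np.tanh(weights @ stimulus)

      def logical_layer(activations, k=3):
          # Only the k most salient signals are promoted; operations at
          # the conscious level act on this selection alone.
          idx = np.argsort(np.abs(activations))[-k:]
          return {int(i): float(activations[i]) for i in idx}

      weights = rng.normal(size=(16, 8))
      stimulus = rng.normal(size=8)
      acts = physical_layer(stimulus, weights)
      print("logical-layer contents:", logical_layer(acts))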

  • Consciousness is often thought to be that aspect of mind least amenable to being understood or replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing, and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), the conclusion many draw is that AC is even more doomed than AI supposedly is. The objective of this paper is to evaluate the soundness of this inference. Methods: The results are achieved by means of conceptual analysis and argumentation. Results and conclusions: It is shown that pessimism concerning the theoretical possibility of artificial consciousness is unfounded, based as it is on misunderstandings of AI and a lack of awareness of the possible roles AI might play in accounting for or reproducing consciousness. This is done by making some foundational distinctions relevant to AC, and using them to show that some common reasons given for AC scepticism do not touch some of the (usually neglected) possibilities for AC, such as prosthetic, discriminative, practically necessary, and lagom (necessary-but-not-sufficient) AC. Along the way, three strands of the author's work in AC (interactive empiricism, synthetic phenomenology, and ontologically conservative heterophenomenology) are used to illustrate and motivate the distinctions and the defences of AC they make possible.

Last update from database: 3/23/25, 8:36 AM (UTC)