Full bibliography: 558 resources
-
If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action, and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency, and indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, one that hasn't been well explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as a first step toward an architecture to support the type of agency that rich human–machine interaction will undoubtedly demand.
-
The term artificial intelligence can also be rendered as machine intelligence; as processes have become more systematic and programmed and manual labor has been reduced, machine reasoning has come to play a unique role in today's progress. Artificial intelligence frameworks are used in every walk of daily life; in short, our lives have become more advanced through the use of AI technology. Its applications operate in various fields, such as manufacturing units, business entities, medical science, law, and driverless vehicles that sense their environment. The main aim of the study is to raise general awareness of artificial intelligence among the public. The study used 1850 respondents to understand their perspectives on artificial intelligence systems; chi-square tests and ANOVA were the statistical tools employed. The examination found that individuals are highly aware of artificial intelligence technology and acknowledge that, because of its advancement, human jobs have been greatly reduced. The 21st century is booming with machine intelligence, and industries are in a situation where they cannot work without AI, since many of the challenges they face are solved by it and much of the work is completed within a short span of time without facing any risks.
-
Angel, L. (2019). How To Build A Conscious Machine. Routledge. https://doi.org/10.4324/9780429033254
This book attempts to address both the engineering and the philosophical issues involved in building a conscious machine. It demonstrates the viability of the engineering project and presents the philosopher's specifications to the cognitive-scientist-cum-engineer as to what will count as a primitive android.
-
Throughout the centuries philosophers have attempted to understand the disparity between conscious experience and the material world, i.e., the problem of consciousness and the apparent mind–body dualism. Achievements in the fields of biology, neurology, and information science in the last century granted us more insight into the processes that govern our minds. While there are still many mysteries to be solved when it comes to fully understanding the inner workings of our brains, new discoveries suggest stepping away from the metaphysical philosophy of mind and closer to the computational viewpoint. In light of the advent of strong artificial intelligence and the development of increasingly complex artificial life models and simulations, we need a well-defined, formal theory of consciousness. In order to facilitate this, in this work we introduce mappism. Mappism is a framework in which alternative views on consciousness can be formally expressed in a uniform way, thus allowing one to analyze and compare existing theories, and enforcing the use of the language of mathematics, i.e., explicit functions and variables. Using this framework, we describe classical and artificial life approaches to consciousness.
-
The ever-widening use of Artificial Intelligence (AI) technologies tends toward uncontrolled growth. At the same time, modern scientific thought lacks an adequate understanding of the consequences of introducing artificial intelligence into a person's daily life as an irremovable element of it. In addition, the very essence of what could be called the "thinking" of artificial intelligence remains philosophical terra incognita. However, it is precisely the features of intelligent machine processes that, both in terms of intermediate goals and of final results, can pose serious threats. Modeling the "phenomenology of AI" leads to the need to reformulate central questions in the philosophy of consciousness, such as the "hard problem of consciousness", and requires a search for ways and means of articulating the "human dimension" of reality for AI. Theoretical basis. The study rests on a phenomenological methodology, which is applied to a model of artificial thinking. The implementation of AI technologies is not accompanied by the development of a philosophy of human coexistence with AI. The algorithms underlying currently existing intelligent technologies do not guarantee that their intermediate and final results comply with ethical criteria. Today, one should ponder the nature and purpose of separating out physical reality within the mental stream that is primary for our Self. The originality of the research lies in connecting the resolution of the "hard problem of consciousness" with the interpretation of qualia as representations of the "physical" as related to bodily states.
In the "thinking process" of AI it is necessary to apply restrictions that fix the metaphysical meaning of the human body with precisely human parameters. Conclusions. It is necessary to take a different look at the connection between thinking and purposeful action, including due action, which means looking at ethics differently. "The basis of universal law" will then consist (including for AI), on the one hand, of preserving the parameters of material processes that are necessary for human existence, and on the other, of maintaining the integrity of the semantic universe in relation to which alone particular senses exist.
-
The subjective experience of consciousness is at once familiar and yet deeply mysterious. Strategies exploring the top-down mechanisms of conscious thought within the human brain have been unable to produce a generalized explanatory theory that scales through evolution and can be applied to artificial systems. Information Flow Theory (IFT) provides a novel framework for understanding both the development and nature of consciousness in any system capable of processing information. In prioritizing the direction of information flow over information computation, IFT produces a range of unexpected predictions. The purpose of this manuscript is to introduce the basic concepts of IFT and explore the manifold implications regarding artificial intelligence, superhuman consciousness, and our basic perception of reality.
-
This half-day Symposium explores themes of digital art, culture, and heritage, bringing together speakers from a range of disciplines to consider technology with respect to artistic and academic practice. As we increasingly see ourselves and life through a digital lens and the world communicated on digital screens, we experience altered states of being and consciousness in ways that blur the lines between digital and physical reality, while our ways of thinking and seeing become a digital stream of consciousness that flows between place and cyberspace. We have entered the postdigital world and are living, working, and thinking with machines as our computational culture driven by artificial intelligence and machine learning embeds itself in everyday life and threads across art, culture, and heritage, juxtaposing them in the digital profusion of human creativity on the Internet.
-
A socially intelligent robot must be capable of extracting meaningful information in real time from the social environment and reacting accordingly with coherent, human-like behavior. Moreover, it should be able to internalize this information, reason on it at a higher level, build its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behavior and the evolution of a certain level of consciousness, which cannot be divorced from a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an "understanding by building" approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is named SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modeling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles the high-level symbolic reasoning, while a more conventional reactive paradigm is assigned the low-level processing and control. The SEAI system is also enriched by a model that simulates Damasio's theory of consciousness and the theory of somatic markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations and their computational formalization at the basis of the SEAI framework. Then, a deeper technical description of the architecture is given, underlining the numerous parallels with the human cognitive system.
Finally, the influence of artificial emotions and feelings, and their link with the robot's beliefs and decisions, is tested on a physical humanoid involved in Human–Robot Interaction (HRI).
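The deliberative/reactive split with somatic-marker biasing described in this abstract can be illustrated with a minimal sketch. All names here (`SomaticMarkers`, `reactive_layer`, the reflex and valence tables) are illustrative assumptions for exposition, not the SEAI system's actual code or API:

```python
"""Toy deliberative/reactive agent: reflexes pre-empt deliberation,
and deliberation is biased by learned emotional valence (a crude
stand-in for somatic markers)."""
from dataclasses import dataclass, field

@dataclass
class SomaticMarkers:
    # Learned valence attached to each action by past emotional outcomes.
    valence: dict = field(default_factory=dict)

    def bias(self, action: str) -> float:
        return self.valence.get(action, 0.0)

    def update(self, action: str, outcome: float, rate: float = 0.5) -> None:
        old = self.valence.get(action, 0.0)
        self.valence[action] = old + rate * (outcome - old)

def reactive_layer(stimulus: str):
    # Low-level reflexes bypass deliberation entirely.
    reflexes = {"loud_noise": "startle", "collision": "retreat"}
    return reflexes.get(stimulus)

def deliberative_layer(options: list, markers: SomaticMarkers) -> str:
    # High-level choice, biased by the agent's accumulated emotional history.
    return max(options, key=markers.bias)

markers = SomaticMarkers()
markers.update("approach_person", +1.0)   # past interaction felt rewarding
markers.update("ignore_person", -0.5)     # past neglect felt costly

action = reactive_layer("collision") or deliberative_layer(
    ["approach_person", "ignore_person"], markers)
print(action)  # → "retreat": the reflex pre-empts deliberation
```

The point of the sketch is only the control structure: the reactive layer answers first when it can, and the deliberative layer's choice is a function of emotional history rather than of the current stimulus alone.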
-
What is “self-awareness”? How can explicit consciousness and sub-consciousness be mapped in relation to each other? How are they related to the self? How can these entities be represented in an artificial conscious system? These questions are the focus of this article. People are aware of only the behavior that they are focusing on; they cannot be directly aware of routine behavior such as walking and breathing. The latter is generally called unconscious behavior, and here we call it sub-conscious behavior. To understand self-awareness, therefore, firstly it is important to map explicit consciousness and sub-consciousness, which is where the self is deeply involved. We consider that if there is no self that refers to itself, no one can be aware of what he himself is doing. In this study we map explicit consciousness and sub-consciousness using an artificial conscious system, and then make a new proposal about the relationship between self-awareness and the self.
-
This chapter describes the computer modeling of a psychic system that generates representations for a system with an artificial corporeality. The model defines how the sensation of thinking is formed in an artificial system and how such a system can experience its own idea generation. Next, the chapter discusses a multiagent approach to designing the artificial psychic system. Further, it describes the organizational memory of the system, which is organized into networks of memory agents related through concepts of semantic proximity or semantic generalization. The concepts of proximity, specialization, and generalization can be precisely defined using qualifications related to the acquaintances of the agents. The chapter finally shows that the general psyche of an artificial system, when distributed over multiple corporeal systems with local artificial consciousnesses, can be unified.
-
This volume examines the work done in many laboratories on the proposition that the mechanisms underlying consciousness in living organisms can be studied using computational theories. It follows an agreement at a 2001 multi-disciplinary meeting of philosophers, neuroscientists, and computer scientists that such a research programme was feasible and worthwhile. The effort is reviewed both as a historical statement and for the positions held at the time the volume went to print. The approaches cover diverse techniques, ranging from machine modeling of neural structures in the brain to abstract models based on the type of logic found in computer programming. Purely theoretical approaches based on hypotheses about what kind of information constitutes a mental state are also included.
-
Reviewing recent closely related developments at the crossroads of biomedical engineering, artificial intelligence, and biomimetic technology, in this paper we attempt to distinguish phenomenological consciousness into three categories based on embodiment: one that is embodied by biological agents, another by artificial agents, and a third that results from collective phenomena in complex dynamical systems. Though this distinction by itself is not new, such a classification is useful for understanding differences in the design principles and technology necessary to engineer conscious machines. It also allows one to zero in on minimal features of phenomenological consciousness in one domain and map them onto their counterparts in another. For instance, awareness and metabolic arousal are used as clinical measures to assess levels of consciousness in patients in coma or in a vegetative state. We discuss analogous abstractions of these measures relevant to artificial systems and their manifestations. This is particularly relevant in light of recent developments in deep learning and artificial life.
-
A discussion of whether artificial machines with natural intelligence would be safe is presented, based on scientific, philosophical, and theological arguments. The finite or infinite nature of the universe is discussed and its implications analyzed. The concepts of destiny and free will are considered, with implications for what it would mean to create an artificial consciousness and how it would be possible to grant or deny it free will. Computer experiments based on cellular automata are carried out and the results considered. A thorough discussion follows and a conclusion is reached.
-
Human consciousness is a target of research in multiple fields of knowledge, which present it as an important characteristic for better handling complex and diverse situations. Artificial consciousness models have arisen, together with theories that attempt to model what we understand about consciousness, in a way that would allow an artificial conscious being to be implemented. The main motivations for studying artificial consciousness are related to the creation of agents more similar to human beings, in order to build more efficient machines. This paper presents an experiment using the Global Workspace Theory and the LIDA Model to build a "conscious" mobile robot in a virtual environment, using the LIDA framework as an implementation of the LIDA Model. The main objective is to evaluate whether consciousness, as implemented by the LIDA framework, can simplify decision-making during navigation of a mobile robot subject to interaction with people, as part of the development of a cicerone robot.
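The Global Workspace cycle underlying the LIDA Model, as summarized in this abstract, can be sketched in a few lines: specialized processes form coalitions, the most active coalition wins the workspace, and its content is broadcast to all behavior modules. The names, salience values, and behavior table below are illustrative assumptions, not the LIDA framework's API:

```python
"""Toy Global Workspace cycle: codelets -> competition -> broadcast.
Navigation decisions reduce to reacting to the one conscious content."""

def perception_codelets(percepts):
    # Each codelet turns a percept into a (content, activation) coalition.
    salience = {"person_ahead": 0.9, "low_battery": 0.6, "wall_left": 0.2}
    return [(p, salience.get(p, 0.1)) for p in percepts]

def competition(coalitions):
    # Only the most active coalition reaches the global workspace.
    return max(coalitions, key=lambda c: c[1])[0]

def broadcast(content):
    # All behavior modules receive the broadcast; the relevant one reacts.
    behaviors = {"person_ahead": "stop_and_greet",
                 "low_battery": "go_to_charger",
                 "wall_left": "keep_course"}
    return behaviors.get(content, "explore")

percepts = {"wall_left", "person_ahead", "low_battery"}
winner = competition(perception_codelets(percepts))
print(winner, "->", broadcast(winner))  # person_ahead -> stop_and_greet
```

The simplification the paper evaluates is visible even at this scale: instead of every behavior module inspecting every percept, each cycle selects a single "conscious" content and the rest of the system reacts only to that.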
-
Machine consciousness is a young research field, yet one inspired by some of the oldest intellectual disciplines, such as the philosophy of mind. Specifically, the mind–body problem has been approached since ancient times, and different accounts have been proposed over the centuries. While none of these accounts, including the various forms of dualism, have proven useful working hypotheses in the domain of machine consciousness, their influence may have oriented this research field towards a frantic search for an illusory and unachievable bridge across the explanatory gap. In his book, Consciousness and Robot Sentience, Haikonen seems to reclaim the predominant position that engineering should have in a domain where we are supposed to deliver pragmatic solutions. In this regard, Haikonen is actually bridging the gap between the philosophical discourse and the practical engineering approach. This is a remarkable move, as Haikonen is essentially claiming that his cognitive architecture is a proof of the inexistence of such a thing as a mind–body problem. In this book review, I analyze the implications, limitations, and prospects of this engineering stance, looking at the main contributions and those aspects that might require further explanation.
-
One of the primary ethical issues related to creating robust artificial intelligences is how to engineer them to ensure that they will behave morally — i.e. that they will consider and treat us appropriately. A much less commonly discussed issue is what their moral status will be — i.e. how we ought to consider and treat them. In this chapter, John Basl takes up the issue of the moral status of artificial consciousnesses. He defends a capacity-based account of moral status, on which an entity’s moral status is determined by the capacities it has, rather than its origins or material composition. An implication of this is that if a machine intelligence has cognitive and psychological capacities like ours, then it would have comparable moral status to us. However, Basl argues that it is highly unlikely that machines will have capacities (and so interests) like ours, and that in fact it will be very difficult to know whether they are conscious and, if they are, what capacities and interests they have.
-
To develop “Artificial Consciousness” for a SELF requires investigation and understanding of what it means to be conscious. The textbook definition of consciousness is:
-
The stated aim of adherents to the paradigm called biologically inspired cognitive architectures (BICA) is to build machines that address "the challenge of creating a real-life computational equivalent of the human mind".(From the mission statement of the new BICA journal.) In contrast, practitioners of machine consciousness (MC) are driven by the observation that these human minds for which one is trying to find equivalents are generally thought to be conscious. (Of course, this is controversial because there is no evidence of consciousness in behavior. But as the hypothesis of the consciousness of others is commonly used, a rejection of it has to be considered just as much as its acceptance.) In this paper, it is asked whether those who would like to build computational equivalents of the human mind can do so while ignoring the role of consciousness in what is called the mind. This is not ignored in the MC paradigm and the consequences, particularly on phenomenological treatments of the mind, are briefly explored. A measure based on a subjective feel for how well a model matches personal experience is introduced. An example is given which illustrates how MC can clarify the double-cognition tenet of Strawson's cognitive phenomenology.
-
An artificial neural network called reaCog is described, based on a decentralized, reactive, and embodied architecture developed to control non-trivial hexapod walking in an unpredictable environment (Walknet) while using insect-like navigation (Navinet). In reaCog, these basic networks are extended in such a way that the complete system acquires the capability of inventing new behaviors and, via internal simulation, of planning ahead. This cognitive expansion enables the reactive system to be enriched with additional procedures. Here, we focus on the question of to what extent properties of phenomena characterized on a different level of description, such as consciousness, can be found in this minimally cognitive system. Adopting a monist view, we argue that the phenomenal aspect of mental phenomena can be neglected when discussing the function of such a system. Under this condition, reaCog can be discussed as being equipped with properties such as bottom-up and top-down attention, intentions, volition, and some aspects of Access Consciousness. These properties have not been explicitly implemented but emerge from the cooperation between the elements of the network. The aspects of Access Consciousness found in reaCog concern the above-mentioned ability to plan ahead and to invent and guide (new) actions. Furthermore, global accessibility of memory elements, another aspect characterizing Access Consciousness, is realized by this network. reaCog allows for both reactive/automatic control and (access-)conscious control of behavior. We discuss examples of interactions between the reactive domain and the conscious domain. Metacognition, or Reflexive Consciousness, is not a property of reaCog. Possible expansions are discussed to allow for further properties of Access Consciousness, for verbal report on internal states, and for Metacognition.
In summary, we argue that even simple networks allow for properties of consciousness if the phenomenal aspect is left aside.
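The control pattern this abstract attributes to reaCog, a reactive controller that, when stuck, tests candidate behaviors by internal simulation before executing one, can be sketched as follows. The function names, the toy world state, and the behavior repertoire are illustrative assumptions, not reaCog's actual network:

```python
"""Toy version of planning-by-internal-simulation: run reactively until a
problem is detected, then try behaviors on an internal body model and
execute the first one the model predicts will succeed."""

def reactive_step(state):
    # Default reactive control; returns None when the reflexes are stuck.
    if not state["blocked"]:
        return "walk_forward"
    return None  # problem detected: hand control to internal simulation

def simulate(state, behavior):
    # Internal model predicts the outcome without moving the real body.
    model = {"step_higher": not state["obstacle_tall"],
             "turn_left": state["clear_left"],
             "back_up": True}
    return model.get(behavior, False)

def plan_by_simulation(state, repertoire):
    # Try behaviors on the internal model; execute the first that works.
    for behavior in repertoire:
        if simulate(state, behavior):
            return behavior
    return "stop"

state = {"blocked": True, "obstacle_tall": True, "clear_left": True}
action = reactive_step(state) or plan_by_simulation(
    state, ["step_higher", "turn_left", "back_up"])
print(action)  # → "turn_left": found by internal trial, not by real trial
```

The sketch shows why the abstract describes planning as an expansion of, rather than a replacement for, reactive control: simulation is only invoked when the reactive layer reports a problem, and what it searches over is the system's existing behavioral repertoire.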