Full bibliography 558 resources

  • Will Artificial Intelligence soon surpass the capacities of the human mind, and will Strong Artificial General Intelligence replace the contemporary Weak AI? It might appear so, but certain fundamental issues have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without getting meanings. Contemporary computers manipulate symbols without meanings; meanings are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that are different from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information has the form of qualitative sensory experiences, qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness. If, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious. The author presents the associative neural architecture HCA as a solution to these problems and the robot XCR-1 as its partial experimental verification.

  • This paper envisions the possibility of a Conscious Aircraft: an aircraft of the future with features of consciousness. To serve this purpose, three main fields are examined: philosophy, cognitive neuroscience, and Artificial Intelligence (AI). While philosophy deals with the question of what consciousness is, cognitive neuroscience studies the relationship of the brain to consciousness, contributing toward the biomimicry of consciousness in an aircraft. The field of AI leads into machine consciousness. The paper discusses several theories from these fields and derives outcomes suitable for the development of a Conscious Aircraft, some of which include the capability of developing “world-models”, learning about self and others, and the prerequisites of autonomy, selfhood, and emotions. Taking these cues, the paper focuses on the latest developments and the standards guiding the field of autonomous systems, and suggests that the future of autonomous systems depends on their transition toward consciousness. Finally, inspired by the theories suggesting levels of consciousness, guided by the Theory of Mind, and building upon state-of-the-art aircraft with autonomous systems, this paper suggests the development of a Conscious Aircraft in three stages: Conscious Aircraft with (1) System-awareness, (2) Self-awareness, and (3) Fleet-awareness, from the perspectives of health management, maintenance, and sustainment.

  • A.I.: Artificial Intelligence tells the story of a robot boy who has been engineered to love his human owner. He is abandoned by his owner and pursues a tragic quest to become a real boy so that he can be loved by her again. This chapter explores the philosophical, psychological, and scientific questions that are asked by A.I. It starts with A.I.’s representation of artificial intelligence and then covers the consciousness of robots, which is closely linked to ethical concerns about the treatment of AIs in the film. There is a discussion about how A.I.’s interpretation of artificial love relates to scientific work on emotion, and the chapter also examines connections between the technology portrayed in A.I. and current research on robotics.

  • Humans are highly intelligent, and their brains are associated with rich states of consciousness. We typically assume that animals have different levels of consciousness, and this might be correlated with their intelligence. Very little is known about the relationships between intelligence and consciousness in artificial systems. Most of our current definitions of intelligence describe human intelligence. They have severe limitations when they are applied to non-human animals and artificial systems. To address this issue, this chapter sets out a new interpretation of intelligence that is based on a system’s ability to make accurate predictions. Human intelligence is measured using tests whose results are converted into values of IQ and g-score. This approach does not work well with non-human animals and AIs, so people have been developing universal algorithms that can measure intelligence in any type of system. In this chapter a new universal algorithm for measuring intelligence is described, which is based on a system’s ability to make accurate predictions. Many people agree that consciousness is the stream of colorful, moving, noisy sensations that starts when we wake up and ceases when we fall into deep sleep. Several mathematical algorithms have been developed to describe the relationship between consciousness and the physical world. If these algorithms can be shown to work on human subjects, then they could be used to measure consciousness in non-human animals and artificial systems.

  • The realization of artificial empathy is conditional on the following: on the one hand, human emotions can be recognized by AI and, on the other hand, the emotions presented by artificial intelligence are consistent with human emotions. Faced with these two conditions, we explored how to identify emotions and how to prove that AI has the ability to reflect on emotional consciousness in the process of cognitive processing. To address the first question, this paper argues that emotion identification mainly includes three processes: emotional perception, emotional cognition, and emotional reflection. It proposes that emotional display mainly includes three dimensions: basic emotions, secondary emotions, and abstract emotions. On this basis, the paper proposes that the realization of artificial empathy requires three cognitive processing capabilities: the integral processing of external emotions, the integral processing of proprioceptive emotions, and the processing that integrates internal and external emotions. Whether the second difficulty can be addressed remains an open question. In order for AI to gain the reflective ability of emotional consciousness, the paper proposes that artificial intelligence should include consistency in the identification of external emotions and emotional expression, the processing of ontological emotions and external emotions, the integration of internal and external emotions, and the generation of proprioceptive emotions.

  • This paper has as its research problem the following question: what is it like to be an artificial intelligence? It aims to critically analyze the epistemological and semantic aspects developed by Thomas Nagel in What is it like to be a bat and The View from Nowhere, demonstrating the relationship between physicalism and subjectivity and its application to artificially intelligent beings. We chose to approach these two works because of the author's importance in analytical philosophy and the approach to consciousness. The analysis shows that the defense of artificial intelligence as a subject of law is intrinsically based on physicalism. However, in refuting it, Nagel does not offer an alternative outside the scope of dualism. Thus, the Procedural Theory of the Subject of Law is developed with stages of emancipation of the being against the law. As a result, it is verified that the reductive physicalist vision is insufficient to substantiate the condition of the subject of law of an artificial intelligence as a legal and political being in the social order. However, if the three stages of its formation (emancipation, interspecies recognition, and personification) are observed, the possibility of achieving the condition under analysis is assumed. It is concluded that it is unverifiable to know what it is like to be an artificial intelligence. In the current scientific stage, an artificially intelligent being cannot (yet) be considered a subject of law, under penalty of characterization of instrumentalism. The methodology of integrated, analytical, deductive, and bibliographic research is used to obtain these results and conclusions.

  • We’re experiencing a time when digital technologies and advances in artificial intelligence, robotics, and big data are redefining what it means to be human. How do these advancements affect contemporary media and music? This collection traces how media, with a focus on sound and image, engages with these new technologies. It bridges the gap between science and the humanities by pairing humanists’ close readings of contemporary media with scientists’ discussions of the science and math that inform them. This text includes contributions by established and emerging scholars performing across-the-aisle research on new technologies, exploring topics such as facial and gait recognition; EEG and audiovisual materials; surveillance; and sound and images in relation to questions of sexual identity, race, ethnicity, disability, and class and includes examples from a range of films and TV shows including Blade Runner, Black Mirror, Mr. Robot, Morgan, Ex Machina, and Westworld. Through a variety of critical, theoretical, proprioceptive, and speculative lenses, the collection facilitates interdisciplinary thinking and collaboration and provides readers with ways of responding to these new technologies.

  • Recent advances in artificial intelligence (AI) have achieved human-scale speed and accuracy for classification tasks. Current systems do not need to be conscious to recognize patterns and classify them. However, for AI to advance to the next level, it needs to develop capabilities such as metathinking, creativity, and empathy. We contend that such a paradigm shift is possible through a fundamental change in the state of artificial intelligence toward consciousness, similar to what took place for humans through the process of natural selection and evolution. To that end, we propose that consciousness in AI is an emergent phenomenon that primordially appears when two machines cocreate their own language through which they can recall and communicate their internal state of time-varying symbol manipulation. Because, in our view, consciousness arises from the communication of inner states, it leads to empathy. We then provide a link between the empathic quality of machines and better service outcomes associated with empathic human agents that can also lead to accountability in AI services.

  • It is noted that there are many different definitions of and views about qualia, and this makes qualia into a vague concept without much theoretical and constructive value. Here, qualia are redefined in a more general way. It is argued that the redefined qualia will be essential to the mind–body problem, the problem of consciousness and also to the symbol grounding problem, which is inherent in physical symbol systems. Then, it is argued that the redefined qualia are necessary for Artificial Intelligence systems for the operation with meanings. Finally, it is proposed that robots with qualia may be conscious.

  • A systematic understanding of the relationship between intelligence and consciousness can only be achieved when we can accurately measure intelligence and consciousness. In other work, I have suggested how the measurement of consciousness can be improved by reframing the science of consciousness as a search for mathematical theories that map between physical and conscious states. This paper discusses the measurement of intelligence in natural and artificial systems. While reasonable methods exist for measuring intelligence in humans, these can only be partly generalized to non-human animals and they cannot be applied to artificial systems. Some universal measures of intelligence have been developed, but their dependence on goals and rewards creates serious problems. This paper sets out a new universal algorithm for measuring intelligence that is based on a system’s ability to make accurate predictions. This algorithm can measure intelligence in humans, non-human animals and artificial systems. Preliminary experiments have demonstrated that it can measure the changing intelligence of an agent in a maze environment. This new measure of intelligence could lead to a much better understanding of the relationship between intelligence and consciousness in natural and artificial systems, and it has many practical applications, particularly in AI safety.
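    The prediction-based measure described in this abstract can be illustrated with a toy sketch. The function name, the scoring rule (fraction of next-observation predictions that turn out correct), and the example environment below are all assumptions for illustration, not the paper's actual algorithm:

    ```python
    def prediction_score(predictions, observations):
        """Fraction of the agent's predictions that matched what was observed."""
        if not predictions:
            return 0.0
        hits = sum(p == o for p, o in zip(predictions, observations))
        return hits / len(predictions)

    # A toy environment that repeats the pattern 0, 0, 1.
    env = [0, 0, 1] * 4

    always_zero = [0] * len(env)      # a predictor that ignores the pattern
    pattern_aware = [0, 0, 1] * 4     # a predictor that has learned the pattern

    print(prediction_score(always_zero, env))    # ≈ 0.67
    print(prediction_score(pattern_aware, env))  # 1.0
    ```

    Under this kind of measure, the predictor that has internalized the regularities of its environment scores higher, which is the intuition the abstract describes; a real implementation would have to average over many environments rather than one hand-picked sequence.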

  • The ability of a computer to have a sense of humor, that is, to generate authentically funny jokes, has been taken by some theorists to be a sufficient condition for artificial consciousness. Creativity, the argument goes, is indicative of consciousness and the ability to be funny indicates creativity. While this line fails to offer a legitimate test for artificial consciousness, it does point in a possibly correct direction. There is a relation between consciousness and humor, but it relies on a different sense of “sense of humor,” that is, it requires the getting of jokes, not the generating of jokes. The question, then, becomes how to tell when an artificial system enjoys a joke. We propose a mechanism, the GHoST test, which may be useful for such a task and can begin to establish whether a system possesses artificial consciousness.

  • Artificial intelligence and robotics are opening up important opportunities in the field of health diagnosis and treatment support with aims like better patient follow-up. A social and emotional robot is an artificially intelligent machine that owes its existence to computer models designed by humans. If it has been programmed to engage in dialogue, detect and recognize emotional and conversational cues, adapt to humans, or even simulate humor, such a machine may on the surface seem friendly. However, such emotional simulation must not hide the fact that the machine has no consciousness.

  • This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone, Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.

  • This paper explores some of the potential connections between natural and artificial intelligence and natural and artificial consciousness. In humans we use batteries of tests to indirectly measure intelligence. This approach breaks down when we try to apply it to radically different animals and to the many varieties of artificial intelligence. To address this issue people are starting to develop algorithms that can measure intelligence in any type of system. Progress is also being made in the scientific study of consciousness: we can neutralize the philosophical problems, we have data about the neural correlates and we have some idea about how we can develop mathematical theories that can map between physical and conscious states. While intelligence is a purely functional property of a system, there are good reasons for thinking that consciousness is linked to particular spatiotemporal patterns in specific physical materials. This paper outlines some of the weak inferences that can be made about the relationships between intelligence and consciousness in natural and artificial systems. To make real scientific progress we need to develop practical universal measures of intelligence and mathematical theories of consciousness that can reliably map between physical and conscious states.

  • This work studies the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to that of conscious beings. In this document, a conscious model of an autonomous agent based on a global workspace architecture is presented. We describe how this agent is viewed from different perspectives in the philosophy of mind, drawing inspiration from their ideas. The goal of this model is to create autonomous agents able to navigate an environment composed of multiple independent magnitudes, adapting to their surroundings in order to find the best possible position according to their inner preferences. The purpose of the model is to test the effectiveness of the cognitive mechanisms it incorporates, such as an attention mechanism for magnitude selection, possession of inner feelings and preferences, use of a memory system to store beliefs and past experiences, and a global workspace that controls and integrates the information processed by all the subsystems of the model. We show in a large set of experiments how an autonomous agent can benefit from having a cognitive architecture such as the one described.
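    The workspace cycle this abstract describes (modules bid for attention, a winner is broadcast to all modules, and the result is stored in memory) can be sketched minimally. The class names, the salience rule, and the percept labels below are illustrative assumptions, not the paper's model:

    ```python
    class Module:
        """A specialist subsystem that bids for, and receives, workspace content."""
        def __init__(self, name):
            self.name = name
            self.received = []

        def propose(self, percepts):
            # Bid with the percept whose magnitude this module finds most salient.
            label, magnitude = max(percepts.items(), key=lambda kv: abs(kv[1]))
            return (self.name, label, magnitude)

        def receive(self, broadcast):
            self.received.append(broadcast)

    class GlobalWorkspace:
        def __init__(self, modules):
            self.modules = modules
            self.memory = []  # stored beliefs and past experiences

        def cycle(self, percepts):
            # Attention: the bid with the largest magnitude wins the workspace.
            bids = [m.propose(percepts) for m in self.modules]
            winner = max(bids, key=lambda b: abs(b[2]))
            # Broadcast the winning content to every module, then store it.
            for m in self.modules:
                m.receive(winner)
            self.memory.append(winner)
            return winner

    ws = GlobalWorkspace([Module("vision"), Module("feelings")])
    winner = ws.cycle({"temperature": 0.2, "light": -0.9})
    print(winner)  # ('vision', 'light', -0.9)
    ```

    The design point the sketch makes is that integration happens in one shared bottleneck: every subsystem sees the same winning content, which is what lets the memory record a single, coherent stream of "experiences".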

  • The popular expectation is that Artificial Intelligence (AI) will soon surpass the capacities of the human mind and Strong Artificial General Intelligence (AGI) will replace the contemporary Weak AI. However, there are certain fundamental issues that have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without getting meanings. Contemporary computers manipulate symbols without meanings; meanings are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that are different from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information has the form of qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness. If, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious.

  • The past century has seen a resurgence of interest in the study of consciousness among scholars of various fields, from philosophy to psychology and neuroscience. Since the birth of Artificial Intelligence in the 1950s, the study of consciousness in machines has received an increasing amount of attention in computer science, giving rise to the new field of machine consciousness (MC). Meanwhile, interdisciplinary research in philosophy, neuroscience, and cognitive science has advanced neurocognitive theories of consciousness. Among the many models proposed for consciousness, the Global Workspace Theory (GWT) is a promising theory that has received a staggering amount of philosophical and empirical support in the past decades. This dissertation discusses the GWT and its potential for MC from a mechanistic point of view. To do so, Chapter 1 gives an overview of the philosophical study of consciousness and the history of MC. Then, in Chapter 2, mechanistic explanations and tri-level models are introduced, which provide a robust framework to construct and assay various theories of consciousness. In Chapter 3, neural correlates (and thereby, neurocognitive theories) of consciousness are introduced. This chapter presents the GWT in detail and, along with its strengths, discusses the philosophical issues it raises. Chapter 4 addresses two computational implementations of the GWT (viz., IDA and LIDA) which satisfy specific goals of MC. Finally, in Chapter 5, one of the philosophical problems of MC, namely, the Frame Problem (FP), is introduced. It is argued that architectures based on the GWT are immune to the FP. The chapter concludes that the GWT is capable of "solving" the FP, and discusses its implications for MC and the computational theory of mind. Chapter 6 wraps up the dissertation by reviewing the content.

  • The relatively new field of artificial intelligence (AI), which is defined as intelligence performed by machines, is crucial for progress in many disciplines in today's society, including medical diagnostics, electronic trading, robotic process automation in finance, healthcare, education, transportation, and many more. However, until now, AIs have only been capable of performing very specific tasks such as low-level visual recognition, speech recognition, coordinated motor control, and pattern detection. What we still need to achieve is a form of everyday human-level performance that is based on common sense, where AIs are able to carry out adaptable planning and task execution and possess meaning-based natural language understanding and generation. These are considered to be “conscious” or “creative” activities that are naturally part of our daily lives and which we execute without great mental effort. Developing conscious AI will allow us to gain knowledge and further our understanding of how consciousness works. In order to develop conscious and creative AI, machines must be self-aware; however, we hypothesize that current AI developments are skipping the most important step on the way to AGIs: introspection (self-analysis and awareness).

Last update from database: 3/23/25, 8:36 AM (UTC)