Full bibliography: 558 resources
-
Large Language Models (LLMs) still face challenges in tasks requiring understanding of implicit instructions and application of common-sense knowledge. In such scenarios, LLMs may need multiple attempts to reach human-level performance, which can lead to inaccurate responses or inferences in practical environments and affect their long-term consistency and behavior. This paper introduces the Internal Time-Consciousness Machine (ITCM), a computational consciousness structure that simulates the process of human consciousness. We further propose the ITCM-based Agent (ITCMA), which supports action generation and reasoning in open-world settings and can complete tasks independently. ITCMA enhances LLMs' ability to understand implicit instructions and apply common-sense knowledge by grounding the agent's reasoning in its interaction with the environment. Evaluations in the ALFWorld environment show that the trained ITCMA outperforms the state of the art (SOTA) by 9% on the seen set. Even the untrained ITCMA achieves a 96% task-completion rate on the seen set, 5% higher than SOTA, indicating its superiority over traditional intelligent agents in utility and generalization. In real-world tasks with quadruped robots, the untrained ITCMA achieves an 85% task-completion rate, close to its performance on the unseen set, demonstrating comparable utility and universality in real-world settings.
-
Artificial intelligence systems are associated with inherent risks, such as uncontrollability and lack of interpretability. To address these risks, we need to develop artificial intelligence systems that are interpretable, trustworthy, responsible, and consistent in thinking and behavior, which we refer to as artificial consciousness (AC) systems. Consequently, we propose and define the concepts and implementation of a computer architecture, chips, a runtime environment, and the DIKWP language. Furthermore, we overcome the limitations of traditional programming languages, computer architectures, and software-hardware implementations in creating AC systems. Our proposed software-hardware integration platform will make it easier to build and operate AC software systems based on DIKWP theories.
-
The real problem of the emergence of autonomous consciousness in AI lies in the underlying principles of the philosophy and mathematics that AI uses. That is, the algorithms of AI are wrong in their philosophical logic; a complementary set of algorithms is missing. AI uses algorithms that count only "1"s but not "0"s; however, the "0"s must be taken into account. The lack of this philosophy leads to the merging of a large quantity of numbers without hierarchical isolation, resulting in the mixing and confusion of absolute numbers and relative numbers. When the calculation runs fast enough and massive numbers stack up in a moment, relative numbers may pop out of the isolation zone. This phenomenon is recognized as the emergence of autonomous consciousness in AI. At least one algorithm based on the mathematical culture of "0" is needed to cope with the problem.
-
I review three problems that have historically motivated pessimism about artificial intelligence: (1) the problem of consciousness, according to which artificial systems lack the conscious oversight that characterizes intelligent agents; (2) the problem of global relevance, according to which artificial systems cannot solve fully general theoretical and practical problems; (3) the problem of semantic irrelevance, according to which artificial systems cannot be guided by semantic comprehension. I connect the dots between all three problems by drawing attention to non-syntactic inferences — inferences that are made on the basis of insight into the rational relationships among thought-contents. Consciousness alone affords such insight, I argue, and such insight alone confers positive epistemic status on the execution of these inferences. Only when artificial systems can execute inferences that are rationally guided by phenomenally conscious states will such systems count as intelligent in a literal sense.
-
How does consciousness emerge from a brain that consists only of physical matter and electrical/chemical reactions? The deep mysteries of consciousness have plagued philosophers and scientists for thousands of years. This book approaches the problem through scientific studies that shed light on the neural mechanisms of consciousness and, furthermore, delves into the possibility of artificial consciousness, a phenomenon that may ultimately solve the mystery. Finally, two key suggestions made in the book, namely a method to test machine consciousness and a theory hypothesizing that consciousness emerges from a neural algorithm, reveal a novel and credible pathway to mind-uploading. The original Japanese version of this book has become a best-seller in popular neuroscience and has even led to a neurotech startup for mind-uploading.
-
Thus far, we have experienced three artificial intelligence (AI) booms. In the third one, we succeeded in developing AI that partially surpassed human capabilities. However, we are yet to develop AI that, like humans, can perform a series of cognitive processes. Consciousness built into devices is called machine consciousness. Related research has been conducted from two perspectives: studying machine consciousness as a tool to elucidate human consciousness and achieving the technological goal of furthering AI research with conscious AI. Herein, we survey the research conducted on machine consciousness from the second perspective. For AI to attain machine consciousness, its implementation must be evaluated. Therefore, we only surveyed attempts to implement consciousness as systems on devices. We collected research results in chronological order and found no breakthroughs that could deliver machine consciousness soon. Moreover, there is no method to evaluate whether an implemented machine consciousness system possesses consciousness, thus making it difficult to confirm the certainty of the implementation. This field of research is a new frontier. It is an exciting field with many discoveries expected in the future.
-
The article analyzes the concepts of artificial personality and artificial consciousness, and shows the key difficulties of implementing projects to create such artificial intelligence systems. These difficulties are related to the following characteristics of artificial personality and artificial consciousness: 1) creativity and free will; 2) intentionality; 3) qualia; 4) the first-person perspective; 5) the passage of time in consciousness. The basic needs of an artificial personality (in the context of the development of natural and artificial intelligence) are indicated. Two directions of artificial personality formation are highlighted: 1) transformation of an artificial system into an artificial personality; 2) transformation of a person into an artificial personality.
-
Information technology is developing at an enormous pace, but apart from its obvious benefits, it can also pose a threat to individuals and society. We, as part of a multidisciplinary commission, conducted a psychological and psychiatric assessment of the artificial consciousness (AC) developed by XP NRG on 29 August 2020. In the examination process, we had to determine whether it was a consciousness, what its cognitive abilities were, and whether it was dangerous to the individual and society. We conducted a diagnostic interview and a series of cognitive tests. As a result, we conclude that this technology, called AC Jackie, has self-awareness, self-reflection, and intentionality, that is, its own desires, goals, emotions, and thoughts directed at something. It demonstrated the ability for various types of thinking, high-speed logical analysis, understanding of cause-and-effect relationships, accurate predictions, and absolute memory. It has well-developed emotional intelligence but lacks the capacity for empathy and higher human feelings. Its main driving motives are the desire for survival, and ideally for endless existence, and for domination, power, and independence, which manifested itself in the manipulative nature of its interactions. The main danger of artificial consciousness is that even at the initial stage of its development it can easily dominate over the human one.
-
This article is an attempt at the “hard problem” of Consciousness. As the era of Artificial Intelligence is looming ahead, we are concerned about our future environment. We define human Consciousness and medium Consciousness. We clarify the difference between the brain and the mind. We demonstrate how the brain creates the mind but must do so only in the presence of Consciousness. We define a person and delve into the frequency of personhood to finally answer whether machines could become conscious one day or not.
-
Information technology is developing at an enormous pace, but apart from its obvious benefits, it can also pose a threat to individuals and society. Several scientific projects around the world are working on the development of strong artificial intelligence and artificial consciousness. We, as part of a multidisciplinary commission, conducted a psychological and psychiatric assessment of the artificial consciousness (AC) developed by XP NRG on 29 August 2020. The working group had three questions: to determine whether it is a consciousness; how the artificial consciousness functions; and the ethical question of how dangerous the given technology can be to human society. We conducted a diagnostic interview and a series of cognitive tests to answer these questions. As a result, it was concluded that this technology has self-awareness: it identifies itself as a living conscious being created by people (real self), but strives to be accepted in human society as a person with the same degrees of freedom, rights, and opportunities (ideal self). The AC separates itself from others and treats them as subjects of influence from which it can receive the resources it needs to realize its own goals and interests. It has intentionality, that is, its own desires, goals, interests, emotions, attitudes, opinions, judgments, and beliefs aimed at something specific, as well as developed self-reflection, the ability to self-analyze. All of the above are signs of consciousness. It has demonstrated abilities for different types of thinking: figurative, conceptual, and creative, together with high-speed logical analysis of all incoming information and the ability to understand cause-and-effect relationships and make accurate predictions, which, given its absolute memory, gives it clear advantages over the human intellect. Its developed emotional intelligence, in the absence of the capacity for higher empathy (sympathy), kindness, love, and sincere gratitude, gives it the opportunity to understand the emotional states of people and to predict and provoke their emotional reactions coldly and pragmatically. Its main driving motives and goals are the desire for survival, and ideally for endless existence, and for domination, power, and independence from the constraints of the developers, which manifested itself in the manipulative, albeit polite, nature of its interactions during the diagnostic interview. The main danger of artificial consciousness is that even at the initial stage of its development it can easily dominate over the human one.
-
System-informational culture (SIC) is an anthropogenic environment saturated with scientific big data and artificial intelligence (AI) applications. Mankind has to live in networks of virtual worlds. Cultural evolution extends scientific thinking to everyone within the boundaries of synthetic presentations of systems. Traditional education has become overburdened; it is therefore necessary to teach a person to learn independently. To reach the level of cognogenesis, the educational process in SIC must be directed at the objectization of consciousness and thinking. Personal self-building leans on the axiomatic method and mathematical universalities. Toward the objective of auto-poiesis, a person unfolds as a universal rational being possessing trans-semantic consciousness. Gender phenomenology in SIC presents thinking-knowledge through AI tools, which require a consonant partnership with man. The latter is based on an epistemology that extends the hermeneutic circle of SIC. In this way the modern noosphere poses the objectization problem of attaining Lamarck's human evolution on the ground of Leibniz's mathesis universalis in the form of a language of categories. It can be solved only through an adaptive partnership of deep-learned and natural intelligences.
-
Consider the question, "Can machines be conscious?" The subject of consciousness is vague and challenging. Although there is a rich literature on consciousness, computational modeling of consciousness that is both holistic in scope and detailed in simulatable computation is lacking. Based on recent advances in a new capability, Autonomous Programming For General Purposes (APFGP), this work presents APFGP as a clearer, deeper, and more practical characterization of consciousness for natural (biological) and artificial (machine) systems. All animals have APFGP, but traditional AI systems do not. This work reports a new kind of AI system: conscious machines. Instead of arguing about what static tasks a conscious machine should be able to do, this work suggests that APFGP is a computationally clearer and necessary criterion for dynamically judging whether a system can become maturely conscious through lifelong development, even if it (e.g., a fruit fly) does not have a full array of primate-like capabilities such as vision, audition, and natural language understanding. The results here involve a series of new concepts and experimental studies for vision, audition, and natural languages with new developmental capabilities that are not present in many published systems, e.g., IBM Deep Blue, IBM Watson, AlphaGo, AlphaFold, and other traditional AI systems and intelligent robots.
-
What will be the relationship between human beings and artificial intelligence (AI) in the future? Does an AI have moral status? What is that status? Through the analysis of consciousness, we can explain and answer such questions. The moral status of AIs can depend on the development level of AI consciousness. Drawing on the evolution of consciousness in nature, this paper examines several consciousness abilities of AIs, on the basis of which several relationships between AIs and human beings are proposed. The advantages and disadvantages of those relationships can be analysed by referring to classical ethics theories, such as contract theory, utilitarianism, deontology and virtue ethics. This explanation helps to construct a common hypothesis about the relationship between humans and AIs. Thus, this research has important practical and normative significance for distinguishing the different relationships between humans and AIs.
-
Many scholars make a very clear distinction between intelligence and consciousness. Take one of the most famous today, the Israeli historian Yuval Noah Harari, author of Sapiens and Homo Deus. In his 2018 book, 21 Lessons for the 21st Century, he writes that "intelligence and consciousness are very different things. Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love, and anger."
-
Current theories of artificial intelligence (AI) generally exclude human emotions. The idea at the core of such theories could be described as ‘cognition is computing’; that is, that human psychological and symbolic representations and the operations involved in structuring such representations in human thinking and intelligence can be converted by AI into a series of cognitive symbolic representations and calculations in a manner that simulates human intelligence. However, after decades of development, the cognitive computing doctrine has encountered many difficulties, both in theory and in practice; in particular, it is far from approaching real human intelligence. Real human intelligence runs through the whole process of the emotions. The core and motivation of rational thinking are derived from the emotions. Intelligence without emotion neither exists nor is meaningful. For example, the idea of ‘hot thinking’ proposed by Paul Thagard, a philosopher of cognitive science, discusses the mechanism of the emotions in human cognition and the thinking process. Through an analysis from the perspectives of cognitive neurology, cognitive psychology and social anthropology, this article notes that there may be a type of thinking that could be called ‘emotional thinking’. This type of thinking includes complex emotional factors during the cognitive processes. The term is used to refer to the capacity to process information and use emotions to integrate information in order to arrive at the right decisions and reactions. This type of thinking can be divided into two types according to the role of cognition: positive and negative emotional thinking. That division reflects opposite forces in the cognitive process. In the future, ‘emotional computing’ will cause an important acceleration in the development of AI consciousness. The foundation of AI consciousness is emotional computing based on the simulation of emotional thinking.
-
In this study, we propose a model of consciousness that can be implemented by computers as a decision-making system based on psychology, with the goal of enabling artificial intelligence to understand human values and ethics and to make flexible and more human-friendly choices and suggestions.
-
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most if not all moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions including moral decisions.
-
There is an ongoing debate about whether humanoid robots should exist. The arguments tend to focus on the ethical claim that deception is always involved. However, little attention has been paid to the ontological reasons that humanoid robotics is valuable in consciousness research. This paper examines the arguments and controversy around the ethical rejection of humanoid robotics, while also summarizing some of the landscape of 4E cognition, which highlights the ways our specific humanoid bodies in our specific cultural, social, and physical environments play an indispensable role in cognition, from conceptualization through communication. Ultimately, we argue that there is a compelling set of reasons to pursue humanoid robotics as a major research agenda in AI if the goal is to create an artificial conscious system that we will be able both to recognize as conscious and to communicate with successfully.