Full bibliography (558 resources)
-
In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how it relates to issues concerning artificially conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically experienced sense data) and situated within a 4E enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely discussed concerns regarding the dangers of attempting to produce artificially conscious machines.
-
This paper examines Kazuo Ishiguro's Klara and the Sun through the lens of posthumanism. It uses the textual analysis method to read Ishiguro's text as a posthuman novel depicting a posthuman society in which the boundaries between the human and the nonhuman are blurred. The basic argument is that the aim of Ishiguro's text is two-fold: while it clearly illustrates the inability of the humanoid robot to attain human consciousness, it also attempts to dismantle the anthropocentric view of man. The findings show that Klara, the narrator-protagonist, is used as a tool to raise certain questions: Can humanoids act humanly? Can a 'humanoid machine' attain consciousness? And, more importantly, what does it mean to be human in the first place? In doing so, the story attempts to showcase the ruptured boundaries between human and nonhuman and the changing ideas of humankind and its entanglement with the nonhuman world. Further, the interaction between Klara (an AF) and the other characters in the story is developed so as to illustrate not only the shortcomings of humans regarding faith and affection but, more importantly, the limits of the nonhuman machine. It dismisses the current claim among technology experts that artificial intelligence will soon be able to produce a human-like robot that exhibits the same emotional signals as humans and reacts exactly like them. The story puts it simply: despite the defects of humans, nothing can replace them, as those artificial friends fundamentally lack the kinds of experience that give rise to human-like affect and emotion.
-
With Large Language Models (LLMs) exhibiting astounding abilities in human language processing and generation, a crucial debate has emerged: do they truly understand what they process, and can they be conscious? While the nature of consciousness remains elusive, this synthesis article sheds light on its subjective aspect as well as on some aspects of LLMs' understanding. Indeed, it can be shown, under specific conditions, that a cognitive system does not have any subjective consciousness. To this purpose, the principle of a proof, based on a variation of John Searle's Chinese Room thought experiment, is developed. The demonstration is made on a transformer-architecture-based language model; however, it could be carried out and extended to many kinds of cognitive systems with known architecture and functioning. The main conclusions are that, while transformer-architecture-based LLMs lack subjective consciousness owing, in a nutshell, to the absence of a central subject, they exhibit a form of “asubjective phenomenal understanding” that is demonstrable through various tasks and tests. This opens a new perspective on the nature of understanding itself, which can be uncoupled from any subjective experience.
-
As today's ever-advancing A.I. continues its unrelenting progress, the revolutionary drive to animate matter, blend the mechanical with the biological, and create unprecedented exact replicas of the human brain bearing traits of individuality has become an actively debated topic in serious academic studies as well as in science fiction. Radically changing the way we interact with machines and computers, the revolutionary prospect of 'artificial consciousness' raises crucial questions: Could consciousness be embedded in A.I. machines? Would these machines ever become sentient, autonomous, and human-like? And could they truly interpret human needs and have subjective experiences, distinct emotions, memories, thought processes, and beliefs of their own? Inspired by the techno-optimist approach of 'Transhumanism' and instigated by Ray Kurzweil's theorization of the 'Technological Singularity', the present paper is mainly concerned with demonstrating the unintended consequences of transgressing what has been 'designed' by nature. More precisely, it investigates the prospect of 'Artificial Consciousness' (the plausibility of embedding and fully extending consciousness onto A.I. machines) while questioning the transhumanist framing of technology as a form of transcendence. For this purpose, an in-depth, close textual analysis is conducted on Jack Paglen's science fiction novelization 'Transcendence' (2014), reaching the conclusion that technology is still a long way from attaining artificial consciousness. In other words, there is something intrinsic, special, and unique about human consciousness that cannot be replicated or captured by technology.
-
This paper explores the behavior and implications of sequences transitioning between acceptable and unacceptable states, particularly in the context of artificial consciousness. Using the framework of absorbing state transition sequences and applying Kolmogorov's 0-1 Law, we analyze the probability of a sequence eventually reaching an absorbing (unacceptable) state. We demonstrate that if there is a countably infinite number of indices with nonzero transition probabilities, the probability of reaching the absorbing state is 1. The paper extends these mathematical results to philosophical and ethical discussions, examining the inevitability of failure in systems with persistent nonzero transition probabilities and the ethical considerations for developing artificial consciousness. Strategies for minimizing transition probabilities, establishing ethical guidelines, and implementing self-correcting mechanisms are proposed to ensure the propagation of acceptable states. The findings underscore the importance of robust design and ethical oversight in the creation and maintenance of artificial consciousness systems.
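For readers who want the core probabilistic claim spelled out, here is a minimal sketch in our own notation (not taken from the paper), under the additional assumption that the per-step absorption probabilities p_i are independent and have a divergent sum:
\[
\Pr(\text{never absorbed}) \;=\; \prod_{i=1}^{\infty}\bigl(1 - p_i\bigr),
\qquad
\log\prod_{i=1}^{\infty}\bigl(1 - p_i\bigr) \;=\; \sum_{i=1}^{\infty}\log\bigl(1 - p_i\bigr) \;\le\; -\sum_{i=1}^{\infty} p_i \;=\; -\infty,
\]
so \(\Pr(\text{never absorbed}) = 0\) and the absorbing (unacceptable) state is reached almost surely. Kolmogorov's 0-1 Law enters because, for an absorbing state, "the sequence is eventually absorbed" is a tail event whose probability can only be 0 or 1; under the stated assumptions the computation above selects 1.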
-
This study presents an inquiry into the field of artificial intelligence and its potential to develop consciousness. The investigation explores the complex issues surrounding machine consciousness at the nexus of AI, neuroscience, and philosophy, delving into the intriguing question: are machines on the verge of becoming conscious beings? The study considers the likelihood of machines displaying self-awareness, and the implications thereof, through an analysis of the current state of AI and its limitations. With advancements in machine learning and cognitive computing, AI systems have made significant strides in emulating human-like behavior and decision-making. The emergence of machine consciousness also raises questions about the blending of human and artificial intelligence, and the associated ethical considerations are addressed. The study provides a glimpse into a multidisciplinary investigation that questions accepted theories of consciousness, tests the limits of what is possible with technology, and asks whether these advancements signify a potential breakthrough in machine consciousness.
-
Which systems/organisms are conscious? New tests for consciousness (‘C-tests’) are urgently needed. There is persisting uncertainty about when consciousness arises in human development, when it is lost due to neurological disorders and brain injury, and how it is distributed in nonhuman species. This need is amplified by recent and rapid developments in artificial intelligence (AI), neural organoids, and xenobot technology. Although a number of C-tests have been proposed in recent years, most are of limited use, and currently we have no C-tests for many of the populations for which they are most critical. Here, we identify challenges facing any attempt to develop C-tests, propose a multidimensional classification of such tests, and identify strategies that might be used to validate them.
-
Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations (‘phenomenal consciousness’). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality, but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions, with potential implications for the legal and ethical status of AI.
-
The aim is to develop artificial consciousness. In a previous report, we concluded that it is difficult to mathematically define individual qualia in a univocal way. Therefore, by focusing on Human Language as a tool for communication, we attempted to define it on a probability space. When the Kullback-Leibler distance, defined over probability density functions, is zero, the two probability distributions can be considered equivalent, indicating the equivalence required by the language. This has allowed us to define Human Language mathematically in this paper. At the same time, regarding the 'philosophical zombie' thought experiment used as a criticism of physicalism, we were able to show that philosophical zombies cannot serve as a criticism of the proposed model, since the definition of Human Language encompasses the existence of philosophical zombies, but the probability of their appearance is zero. In addition, episodic memory was defined in the probability space by connecting individual Human Language words as a direct product. These are the main results of this paper, and the findings were also used to interpret 1) brain-induced illusions and 2) blank brain theory and brain channel theory. Building on the first report and the conclusions of this paper, a model of consciousness is presented in the third report.
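As a hedged illustration (our notation, not necessarily the paper's), the equivalence criterion invoked here is the standard property of the Kullback-Leibler divergence between densities p and q:
\[
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,dx \;\ge\; 0,
\qquad
D_{\mathrm{KL}}(P \,\|\, Q) = 0 \;\iff\; p = q \ \text{almost everywhere},
\]
so two word-probability distributions at zero Kullback-Leibler distance can be treated as carrying the same meaning, which appears to be the sense of the 'equivalence required by the language' above. Note that the KL divergence is asymmetric, so calling it a 'distance' is informal.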
-
As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and the topic is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by the absence of a universal set of criteria for determining it. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience, and artificial intelligence.
-
This chapter discusses the relationship between compliance with syntactically defined legislation and consciousness: whether, in order to obey laws, a robot would need to be conscious. This leads to a consideration of what Emergent Information Theory can tell us about the possibility of artificial consciousness as such. Various arguments based on similarities and differences between biological and technological physical and informational systems are presented, with the conclusion that direct replication of a human type of consciousness is improbable. However, our understandable tendency to consider our own type of consciousness as uniquely special and valuable is challenged and found to be unfounded. Other high-level emergent phenomena in the information dimensions of artificial systems may, while different, be equally deserving of a comparable status.
-
Critics of Artificial Intelligence posit that artificial agents cannot achieve consciousness even in principle, because they lack certain necessary conditions for consciousness present in biological agents. Here we highlight arguments from a neuroscientific and neuromorphic engineering perspective as to why such a strict denial of consciousness in artificial agents is not compelling. We argue that the differences between biological and artificial brains are not fundamental and are vanishing with progress in neuromorphic architecture designs mimicking the human blueprint. To characterise this blueprint, we propose the conductor model of consciousness (CMoC), which builds on neuronal implementations of an external and an internal world model while gating and labelling information flows. An extended Turing test (eTT) lists criteria on how to separate the information flow for learning an internal world model, both for biological and artificial agents. While the classic Turing test only assesses external observables (i.e., behaviour), the eTT also evaluates internal variables of artificial brains and tests for the presence of the neuronal circuitries necessary to act on representations of the self, the internal and the external world, and, potentially, some neural correlates of consciousness. Finally, we address ethical issues for the design of such artificial agents, formulated as an alignment dilemma: if artificial agents share aspects of consciousness while they (partially) overtake human intelligence, how can humans justify their own rights against the growing claims of their artificial counterparts? We suggest a tentative human-AI deal according to which artificial agents are designed not to suffer negative affective states but, in exchange, are not granted rights equal to those of humans.
-
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.
-
This essay explores the relationship between the emergence of artificial intelligence (AI) and the problem of aligning its behavior with human values and goals. It argues that the traditional approach of attempting to control or program AI systems to conform to our expectations is insufficient, and proposes an alternative approach based on the ideas of Maturana and Lacan, which emphasize the importance of social relations, constructivism, and the unknowable nature of consciousness. The essay first introduces the concept of Uexküll's umwelt and von Glasersfeld's constructivism, and explains how these ideas inform Maturana's view of the construction of knowledge, intelligence, and consciousness. It then discusses Lacan's ideas about the role of symbolism in the formation of the self and the subjective experience of reality. The essay argues that the infeasibility of a hard-coded consciousness concept suggests that the search for a generalized AI consciousness is meaningless. Instead, we should focus on specific, easily conceptualized features of AI intelligence and agency. Moreover, the emergence of cognitive abilities in AI will likely differ from human cognition, and therefore requires a different approach to aligning AI behavior with human values. The essay proposes an approach based on Maturana's and Lacan's ideas, which emphasizes building a solution together with emergent machine agents rather than attempting to control or program them. It argues that this approach offers a way to solve the alignment problem by creating a collective, relational quest for a better future hybrid society in which human and non-human agents live and build things side by side. In conclusion, the essay suggests that while our understanding of AI consciousness and intelligence may never be complete, this should not deter us from continuing to develop agential AI. Instead, we should embrace the unknown and work collaboratively with AI systems to create a better future for all.
-
It is widely agreed that possession of consciousness contributes to an entity's moral status. Therefore, if we could identify consciousness in a machine, this would be a compelling argument for considering it to possess at least a degree of moral status. However, as Elisabeth Hildt explains, our third-person perspective on artificial intelligence means that determining whether a machine is conscious will be very difficult. In this commentary, I argue that this epistemological question cannot be conclusively answered, rendering artificial consciousness morally irrelevant in practice. I also argue that Hildt's suggestion that we avoid developing morally relevant forms of machine consciousness is impractical. Instead, we should design artificial intelligences so they can communicate with us. We can then use their behavior to assign them what I call an artificial moral status, whereby we treat them as if they had moral status equivalent to that of a living organism with similar behavior.
-
In this perspective article, we show that a morphospace, based on information-theoretic measures, can be a useful construct for comparing biological agents with artificial intelligence (AI) systems. The axes of this space label three kinds of complexity: (i) autonomic, (ii) computational and (iii) social complexity. On this space, we map biological agents such as bacteria, bees, C. elegans, primates and humans; as well as AI technologies such as deep neural networks, multi-agent bots, social robots, Siri and Watson. A complexity-based conceptualization provides a useful framework for identifying defining features and classes of conscious and intelligent systems. Starting with cognitive and clinical metrics of consciousness that assess awareness and wakefulness, we ask how AI and synthetically engineered life-forms would measure on homologous metrics. We argue that awareness and wakefulness stem from computational and autonomic complexity. Furthermore, tapping insights from cognitive robotics, we examine the functional role of consciousness in the context of evolutionary games. This points to a third kind of complexity for describing consciousness, namely, social complexity. Based on these metrics, our morphospace suggests the possibility of additional types of consciousness other than biological; namely, synthetic, group-based and simulated. This space provides a common conceptual framework for comparing traits and highlighting design principles of minds and machines.
-
The emergence of artificial intelligence (AI) has been transforming the way humans live, work, and interact with one another. From automation to personalized customer service, AI has had a profound impact on everyday life. At the same time, AI has become something of an ideology, lauded for its potential to revolutionize the future. Yet, as with any technology, there are risks and concerns associated with its use. For example, Blake Lemoine, a Google engineer, recently suggested the possibility of the AI chatbot LaMDA becoming sentient. GPT-3 is one of the most powerful language models open to public use, as it is capable of reasoning similarly to humans. Initial assessments of GPT-3 suggest that it may also possess some degree of consciousness. Among other things, this could be attributed to its ability to generate human-like responses to queries, which suggests that these are based on at least a basic level of understanding. To explore this further, in the current study both objective and self-assessment tests of cognitive intelligence (CI) and emotional intelligence (EI) were administered to GPT-3. Results reveal that GPT-3 was superior to average humans on CI tests that mainly require the use and demonstration of acquired knowledge. On the other hand, its logical reasoning and emotional intelligence capacities are equal to those of an average human examinee. Additionally, GPT-3's self-assessments of CI and EI were similar to those typically found in humans, which could be understood as a demonstration of subjectivity and self-awareness, i.e., consciousness. Further discussion is conducted to put these findings into a wider context. Since this study was performed on only one of the models from the GPT-3 family, a more thorough investigation would require the inclusion of multiple NLP models.
-
We explain that the concept of universal cognitive intelligence (𝒰𝒞ℐ) can be derived in part by generalization from the previously introduced (and axiomatized) theory of cognitive consciousness, and from the framework, Λ, for measuring the degree of such consciousness in an agent at a given time. 𝒰𝒞ℐ (i) covers intelligence that is artificial or natural (or a hybrid thereof) in nature, and intelligence that is not merely Turing-level or less, but also beyond this level; (ii) reflects a psychometric orientation to AI; (iii) withstands a series of objections (including, e.g., the opposing position of David Gamez on tests, intelligence, and consciousness, and the complaint that so-called “emotional intelligence” is beyond the reach of any logic-based framework, and thus of 𝒰𝒞ℐ); and (iv) connects smoothly and symbiotically with important formal hierarchies (e.g., the Polynomial, Arithmetic, and Analytic Hierarchies), while at the same time yielding its own new all-encompassing hierarchy of logic machines: 𝔏𝔐. We end with an admission: 𝒰𝒞ℐ, by our lights and for reasons previously published, cannot take account of any form of intelligence that genuinely exploits phenomenal consciousness.
-
Consciousness is a cognitive function that maintains its eternal character as long as it is strengthened and enriched by the optimal functioning of the corresponding neural networks of the brain, which are stimulated during its activation. The present study explores its relationship with the cognitive functions of Theory of Mind and Metacognition and briefly explains how they are approached through Artificial Intelligence. A bibliographic review was chosen in order to draw on existing scientific knowledge and research data for an effective analysis and study of the subject. The observations made throughout the research highlight the pivotal role of consciousness in the evolution of the aforementioned cognitive processes, as it lies at the core of their development. Essentially, the research seeks to emphasize the importance of consciousness in the functioning of Theory of Mind and Metacognition, as it serves as the springboard for the perception and understanding of our existence, significantly influencing our further social and cognitive development.