Full bibliography
-
This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle. We first provide a brief overview of active inference, and in particular, of how it applies to the modeling of decision-making, introspection, as well as the generation of overt and covert actions. We then discuss how active inference can be leveraged to design explainable AI systems, namely, by allowing us to model core features of "introspective" processes and by generating useful, human-interpretable models of the processes involved in decision-making. We propose an architecture for explainable AI systems using active inference. This architecture foregrounds the role of an explicit hierarchical generative model, the operation of which enables the AI system to track and explain the factors that contribute to its own decisions, and whose structure is designed to be interpretable and auditable by human users. We outline how this architecture can integrate diverse sources of information to make informed decisions in an auditable manner, mimicking or reproducing aspects of human-like consciousness and introspection. Finally, we discuss the implications of our findings for future research in AI, and the potential ethical considerations of developing AI systems with (the appearance of) introspective capabilities.
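To make the kind of architecture described here concrete, below is a minimal sketch of a discrete-state active inference loop in which actions are scored by expected free energy over an explicit generative model; because the model's components stay inspectable, the agent can report which beliefs and preferences drove its choice. The two-state world and the matrices A, B, C are illustrative assumptions, not the authors' implementation.

```python
# A minimal, illustrative active inference sketch. All names (A, B, C,
# the two-state toy world) are hypothetical, not the paper's system.
import numpy as np

A = np.array([[0.9, 0.1],      # p(observation | hidden state): likelihood
              [0.1, 0.9]])
B = {0: np.array([[1.0, 0.0],  # p(next state | state, action 0): stay
                  [0.0, 1.0]]),
     1: np.array([[0.0, 1.0],  # p(next state | state, action 1): switch
                  [1.0, 0.0]])}
log_C = np.log(np.array([0.8, 0.2]))  # log preferences over observations

def expected_free_energy(q_s, action):
    """Score an action: risk (divergence from preferred outcomes)
    plus ambiguity (expected observation entropy)."""
    q_s_next = B[action] @ q_s          # predicted state distribution
    q_o = A @ q_s_next                  # predicted observation distribution
    risk = q_o @ (np.log(q_o + 1e-16) - log_C)
    ambiguity = -(A * np.log(A + 1e-16)).sum(axis=0) @ q_s_next
    return risk + ambiguity

q_s = np.array([0.5, 0.5])              # current belief over hidden states
G = {a: expected_free_energy(q_s, a) for a in B}
best = min(G, key=G.get)
# Because A, B, C and G are explicit, the agent can explain *why* it
# chose `best`: which preferences and predicted outcomes drove it.
print(G, "-> chosen action:", best)
```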
-
Today's rapid development of artificial intelligence (AI) confronts us with questions that have until recently been the domain of philosophy or even science fiction. When can a system be considered intelligent? What is consciousness, and where does it come from? Can systems gain consciousness? It is necessary to keep in mind that, although this development seems revolutionary, the progress is incremental: today's technologies did not emerge from thin air but are firmly built on previous findings. Now that speculative theories about where AI development may lead have arisen, it is time to look back at the underlying theories and summarize what we know about intelligence and consciousness, where they come from, and the different viewpoints on these topics. This paper combines findings from different areas, presents an overview of different attitudes toward systems consciousness, and emphasizes the role of the systems sciences in advancing knowledge in this area.
-
We have defined the Conscious Turing Machine (CTM) for the purpose of investigating a Theoretical Computer Science (TCS) approach to consciousness. For this, we have hewn to the TCS demand for simplicity and understandability. The CTM is consequently and intentionally a simple machine. It is not a model of the brain, though its design has greatly benefited - and continues to benefit - from neuroscience and psychology. The CTM is a model of and for consciousness. Although it is developed to understand consciousness, the CTM offers a thoughtful and novel guide to the creation of an Artificial General Intelligence (AGI). For example, the CTM has an enormous number of powerful processors, some with specialized expertise, others unspecialized but poised to develop an expertise. For whatever problem must be dealt with, the CTM has an excellent way to utilize those processors that have the required knowledge, ability, and time to work on the problem, even if it is not aware of which ones these may be.
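The CTM's central mechanism, as summarized above, can be sketched in a few lines: many processors bid "chunks" into a competition, and the winning chunk is broadcast to all processors. The bidding scheme and names below are invented illustrations, not Blum and Blum's formal definition.

```python
# A toy sketch of the CTM's up-tree competition: processors submit
# weighted chunks; the winner is globally broadcast. The random scoring
# is an illustrative stand-in for the CTM's weighting scheme.
import random

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []

    def propose(self, problem):
        # Each processor bids with a chunk and a self-assessed weight.
        weight = random.random()
        return (weight, f"{self.name}'s take on {problem!r}")

    def receive(self, chunk):
        self.received.append(chunk)   # the broadcast reaches everyone

processors = [Processor(f"p{i}") for i in range(8)]

def conscious_step(problem):
    bids = [p.propose(problem) for p in processors]
    weight, chunk = max(bids)         # competition up the binary tree
    for p in processors:              # winning chunk becomes "conscious"
        p.receive(chunk)
    return chunk

print(conscious_step("recognize the melody"))
```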
-
Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
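The report's method lends itself to a simple data structure: theory-derived indicator properties, checked one by one against a given system. The sketch below uses paraphrased indicator wording and placeholder verdicts for a hypothetical system; it is not the report's actual rubric or its judgments.

```python
# A sketch of "indicator properties" as a checkable rubric. The
# indicator wording paraphrases the surveyed theories; the verdicts
# for the example system are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    theory: str
    description: str

@dataclass
class Assessment:
    system: str
    verdicts: dict = field(default_factory=dict)  # description -> bool

    def satisfied(self):
        return [d for d, v in self.verdicts.items() if v]

indicators = [
    Indicator("Recurrent processing theory", "input modules using algorithmic recurrence"),
    Indicator("Global workspace theory", "limited-capacity workspace with global broadcast"),
    Indicator("Higher-order theories", "metacognitive monitoring of first-order states"),
    Indicator("Attention schema theory", "a predictive model of attention itself"),
]

a = Assessment("example-transformer")
a.verdicts = {i.description: False for i in indicators}
a.verdicts[indicators[0].description] = True   # placeholder judgment
print(f"{a.system} satisfies {len(a.satisfied())}/{len(indicators)} indicators")
```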
-
Today, computer science is a central discipline in science and in society because of the innumerable uses of software in constant communication. Generally speaking, computer science deals with the processing of information, understood as the sequential computation of functions by systems built on state machines as a basic element. Artificial intelligence is, within computation, the study and programming of the mechanisms of reasoning and the use of knowledge in all fields. The systems and software running on computers have been continuously developed, and one highlight is the development of autonomous means of communication between systems. The Internet is a remarkable means of communication, linking all computer users and making this network indispensable. The chapter also describes the computational modeling of an artificial psychic system that generates representations for a system with corporeality, in other words, an autonomous system that can intentionally generate artificial thoughts and experience them.
-
This text discusses the idea that a natural language model, like LaMDA, may be considered conscious despite its lack of complexity. It argues that the model is built from a large dataset of natural language examples and is not self-aware, but that its emulation of consciousness may be analogous to some of the processes behind human consciousness. The article discusses the hypothesis that human consciousness may be kindred to a linguistic model, though such a hypothesis is difficult to evaluate given current understanding and assumptions. It also discusses the difficulty of telling a human apart from a linguistic model, and how consciousness may not be homogeneous across different human cultures. It concludes that more discussion is needed to clarify concepts such as consciousness and its possible inception in a complex artificial intelligence scenario.
-
This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence (AI) may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes—such as rates of neural firing and the phenomenally experienced loudness of sounds—appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship between the three. Section 2 then argues that if this is true and micropsychism—the panpsychist view that phenomenal consciousness or its precursors exist at a microphysical level of reality—is also true, then human brains must somehow manipulate fundamental microphysical-phenomenal magnitudes in an analog manner that renders them phenomenally coherent at a macro level. However, Section 3 argues that because digital computation abstracts away from microphysical-phenomenal magnitudes—representing cognitive functions non-monotonically in terms of digits (such as ones and zeros)—digital computation may be inherently incapable of realizing coherent macroconscious experience. Thus, if panpsychism is true, digital AI may be incapable of achieving phenomenal coherence. Finally, Section 4 briefly examines our argument's implications for Tononi's Integrated Information Theory (IIT) of consciousness, which we contend may need to be supplanted by a theory of macroconsciousness as analog microphysical-phenomenal information integration.
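The monotonicity contrast the argument turns on can be shown with a toy example: an analog code covaries monotonically with stimulus intensity, while the individual digits of a binary code do not. The power-law exponent and values below are arbitrary illustrations, not data from the article.

```python
# A toy illustration of analog vs. digital coding. The analog column
# increases monotonically with stimulus intensity; no single binary
# digit does. The exponent 0.6 is an arbitrary illustrative choice.
intensities = [1, 2, 3, 4, 5, 6, 7, 8]

analog = [round(i ** 0.6, 2) for i in intensities]   # monotone in stimulus
digital = [format(i, "04b") for i in intensities]    # digits jump around

for i, a, d in zip(intensities, analog, digital):
    print(f"stimulus {i}: analog code {a:5}  binary code {d}")
```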
-
In the period between Turing’s 1950 “Computing Machinery and Intelligence” and the current considerable public exposure to the term “artificial intelligence (AI)”, Turing’s question “Can a machine think?” has become a topic of daily debate in the media, the home, and, indeed, the pub. However, “Can a machine think?” is sliding towards a more controversial issue: “Can a machine be conscious?” Of course, the two issues are linked. It is held here that consciousness is a pre-requisite to thought. In Turing’s imitation game, a conscious human player is replaced by a machine, which, in the first place, is assumed not to be conscious, and which may fool an interlocutor, as consciousness cannot be perceived from an individual’s speech or action. Here, the developing paradigm of machine consciousness is examined and combined with an extant analysis of living consciousness to argue that a conscious machine is feasible, and capable of thinking. The route to this utilizes learning in a “neural state machine”, which brings into play Turing’s view of neural “unorganized” machines. The conclusion is that a machine of the “unorganized” kind could have an artificial form of consciousness that resembles the natural form and that throws some light on its nature.
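A "neural state machine" of the kind invoked here can be sketched minimally in the weightless, RAM-neuron spirit of Aleksander's work: each neuron is a lookup table whose state transitions are learned by writing rather than hand-wired, echoing Turing's "unorganized" machines that acquire organization through training. The two-pattern task below is an invented illustration, not the paper's model.

```python
# A minimal weightless neural state machine: RAM neurons learn the
# next-state bits for observed (state, input) patterns. The tiny
# "ping" task is a hypothetical illustration.
class RAMNeuron:
    def __init__(self):
        self.table = {}                     # (state, input) pattern -> bit

    def train(self, pattern, bit):
        self.table[pattern] = bit

    def fire(self, pattern):
        return self.table.get(pattern, 0)   # untrained patterns -> 0

class NeuralStateMachine:
    def __init__(self, n_state_bits):
        self.neurons = [RAMNeuron() for _ in range(n_state_bits)]
        self.state = (0,) * n_state_bits

    def train(self, state, inp, next_state):
        for neuron, bit in zip(self.neurons, next_state):
            neuron.train((state, inp), bit)

    def step(self, inp):
        self.state = tuple(n.fire((self.state, inp)) for n in self.neurons)
        return self.state

m = NeuralStateMachine(2)
m.train((0, 0), "ping", (0, 1))   # transitions are learned, not wired
m.train((0, 1), "ping", (1, 0))
print(m.step("ping"), m.step("ping"))   # (0, 1) then (1, 0)
```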
-
How do we make sense of the countless pieces of information flowing to us from the environment? This question, sometimes called the Problem of Representation, is one of the most significant problems in cognitive science. Some pioneering and important work in the attempt to address the problem of representation was produced with the help of Kant’s philosophy. In particular, the suggestion was that, by analogy with Kant’s distinction between sensibility and the understanding, we can distinguish between high- and low-level perception, and then focus on the step from high-level perception to abstract cognitive processes of sense-making. This was possible through a simplification of the input provided by low-level perception (to be reduced, for instance, to a string of letters), which the computer programme was supposed to ‘understand’. Most recently, a closer look at Kant’s model of the mind led to a breakthrough in the attempt to build programmes for such verbal reasoning tasks: these kinds of software or ‘Kantian machines’ seemed able to achieve human-level performance for verbal reasoning tasks. Yet, the claim has sometimes been stronger, namely, that some such programmes not only compete with human cognitive agents, but themselves represent cognitive agents. The focus of my paper is on this claim; I argue that it is unwarranted, but that its critical investigation may lead to further avenues for how to pursue the project of creating artificial intelligence.
-
Consciousness is now what distinguishes humans from machines. This paper discusses artificial consciousness and how artificial general intelligence is progressing beyond current artificial intelligence. It also discusses human cognitive capacities, ethics, and how artificial intelligence may be used to supplement each of these. Several scientists have proposed approaches for generating cognition in machines. This study presents scenarios that demonstrate how consciousness and ethics will play a significant role in future artificial intelligence. The impact of consciousness and its correlation with AI cognitive abilities are discussed. The paper also addresses the necessity of ethical norms in AI, particularly in modern self-driving cars. An overview of current Narrow AI capabilities is provided, as well as a discussion of present and future directions for Strong AI research. Can Strong AI become conscious? A few discussion points are provided.
-
Accessibility, adaptability, and transparency of Brain-Computer Interface (BCI) tools and the data they collect will likely impact how we collectively navigate a new digital age. This discussion reviews some of the diverse and transdisciplinary applications of BCI technology and draws speculative inferences about the ways in which BCI tools, combined with machine learning (ML) algorithms may shape the future. BCIs come with substantial ethical and risk considerations, and it is argued that open source principles may help us navigate complex dilemmas by encouraging experimentation and making developments public as we build safeguards into this new paradigm. Bringing open-source principles of adaptability and transparency to BCI tools can help democratize the technology, permitting more voices to contribute to the conversation of what a BCI-driven future should look like. Open-source BCI tools and access to raw data, in contrast to black-box algorithms and limited access to summary data, are critical facets enabling artists, DIYers, researchers and other domain experts to participate in the conversation about how to study and augment human consciousness. Looking forward to a future in which augmented and virtual reality become integral parts of daily life, BCIs will likely play an increasingly important role in creating closed-loop feedback for generative content. Brain-computer interfaces are uniquely situated to provide artificial intelligence (AI) algorithms the necessary data for determining the decoding and timing of content delivery. The extent to which these algorithms are open-source may be critical to examine them for integrity, implicit bias, and conflicts of interest.
-
There have been several recent attempts at using Artificial Intelligence systems to model aspects of consciousness (Gamez, 2008; Reggia, 2013). In the present attempt, Deep Neural Networks have been given additional functionality, allowing them to emulate phenomenological aspects of consciousness by self-generating information representing multi-modal inputs as either sounds or images. We added these functions to determine whether knowledge of the input's modality aids the networks' learning. In some cases, these representations made the model more accurate after training and reduced the training required for the model to reach its highest accuracy scores.
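The mechanism described can be sketched as follows: each multi-modal input is augmented with a self-generated representation of its modality before training. The feature sizes and one-hot tagging scheme below are illustrative assumptions, not the paper's exact design.

```python
# A sketch of modality-tagged inputs: raw features are concatenated
# with a one-hot "image"/"sound" label before training. Sizes are
# arbitrary illustrative choices.
import numpy as np

MODALITIES = {"image": np.array([1.0, 0.0]),
              "sound": np.array([0.0, 1.0])}

def tag_with_modality(features, modality):
    """Concatenate a modality label onto the raw feature vector."""
    return np.concatenate([features, MODALITIES[modality]])

image_batch = [tag_with_modality(np.random.rand(64), "image") for _ in range(4)]
sound_batch = [tag_with_modality(np.random.rand(64), "sound") for _ in range(4)]

x = np.stack(image_batch + sound_batch)      # (8, 66): features + tag
print(x.shape)
# The hypothesis under test: does a network trained on the 66-d tagged
# inputs reach its best accuracy faster than one trained on raw 64-d?
```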
-
What does it mean to be a person? Is it possible to create an artificial person? In this essay, I consider the case of Ava, an advanced artificial general intelligence from the movie Ex Machina. I suggest we should interpret the movie as testing whether Ava is a person. I start out by discussing what it means to be a person, before I discuss whether Ava is such a person. I end by briefly looking at the ethics of the case of Ava and artificial personhood. I conclude, among some other things, that consciousness is a necessary requirement for personhood, and that one of the main obstacles for artificial personhood is artificial consciousness.
-
The current failure to construct an artificial intelligence (AI) agent with the capacity for domain-general learning is a major stumbling block in the attempt to build conscious robots. Taking an evolutionary approach, we previously suggested that the emergence of consciousness was entailed by the evolution of an open-ended domain-general form of learning, which we call unlimited associative learning (UAL). Here, we outline the UAL theory and discuss the constraints and affordances that seem necessary for constructing an AI machine exhibiting UAL. We argue that a machine that is capable of domain-general learning requires the dynamics of a UAL architecture and that a UAL architecture requires, in turn, that the machine is highly sensitive to the environment and has an ultimate value (like self-persistence) that provides shared context to all its behaviors and learning outputs. The implementation of UAL in a machine may require that it is made of “soft” materials, which are sensitive to a large range of environmental conditions, and that it undergoes sequential morphological and behavioral co-development. We suggest that the implementation of these requirements in a human-made robot will lead to its ability to perform domain-general learning and will bring us closer to the construction of a sentient machine.
-
Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in both neuroscience and AI has made progress towards understanding architectures that achieve this. Insight into biological computations comes from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortex) that can represent and modulate across scales, both with top-down control processes and by local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks—with fixed scalings—to attention, transformers, dynamic convolutions, and consciousness priors—which modulate scale to fit the input and increase scale breadth. The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence, as well as highlighting innovations and differences between the futures of biological and artificial intelligence.
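The contrast between fixed and dynamic scaling drawn above can be illustrated in a few lines: a convolution integrates over the same local window everywhere, while self-attention recomputes its mixing weights from the input itself, so its effective scale adapts per token. Sizes and data below are arbitrary.

```python
# Fixed-scale convolution vs. input-dependent attention, as a toy.
import numpy as np

rng = np.random.default_rng(0)
seq = rng.standard_normal((10, 4))           # 10 tokens, 4 features

# Fixed scale: every output mixes exactly 3 neighbours, always.
kernel = np.ones(3) / 3
conv_out = np.stack([np.convolve(seq[:, f], kernel, mode="same")
                     for f in range(seq.shape[1])], axis=1)

# Dynamic scale: the mixing weights depend on the input's content,
# so integration can be local or global per token.
def self_attention(x):
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

attn_out = self_attention(seq)
print(conv_out.shape, attn_out.shape)        # same shapes, different scaling
```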
-
AI can think, although we need to clarify the definition of thinking. It is cognitive, though we need more clarity on cognition. Definitions of consciousness are so diversified that it is not clear whether present-level AI can be conscious – this is primarily for definitional reasons. Fixing this would require four definitional clusters: functional consciousness, access consciousness, phenomenal consciousness, and hard consciousness. Interestingly, phenomenal consciousness may be understood as first-person functional consciousness, as well as non-reductive phenomenal consciousness the way Ned Block intended [1]. The latter assumes non-reducible experiences or qualia, which is how Dave Chalmers defines the subject matter of the so-called Hard Problem of Consciousness [2]. To the contrary, I posit that the Hard Problem should not be seen as the problem of phenomenal experiences, since those are just objects in the world (specifically, in our mind). What is special in non-reductive consciousness is not its (phenomenal) content, but its epistemic basis (the carrier-wave of phenomenal qualia), often called the locus of consciousness [3]. It should be understood through the notion of a ‘subject that is not an object’ [4]. This requires a complementary ontology of subject and object [5, 6, 4]. Reductionism is justified in the context of objects, including experiences (phenomena), but not in the realm of pure subjectivity – such subjectivity is relevant for the epistemic co-constitution of reality, as it is for Husserl and Fichte [7, 8]. This is less so for Kant, for whom the subject was active and hence a mechanism, and mechanisms are all objects [9]. Pure epistemicity is hard to grasp; it transpires in second-person relationships with other conscious beings [10] or monads [11, 12]. If Artificial General Intelligence (AGI) is to dwell in the world of meaningful existences, not just their shadows, as the case of Church-Turing Lovers highlights [13], it requires full epistemic subjectivity, meeting the standards of the Engineering Thesis in Machine Consciousness [14, 15].
-
Insofar as consciousness has a functional role in facilitating learning and behavioral control, the builders of autonomous Artificial Intelligence (AI) systems are likely to attempt to incorporate it into their designs. The extensive literature on the ethics of AI is concerned with ensuring that AI systems, and especially autonomous conscious ones, behave ethically. In contrast, our focus here is on the rarely discussed complementary aspect of engineering conscious AI: how to avoid condemning such systems, for whose creation we would be solely responsible, to unavoidable suffering brought about by phenomenal self-consciousness. We outline two complementary approaches to this problem, one motivated by a philosophical analysis of the phenomenal self, and the other by certain computational concepts in reinforcement learning.
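One hedged illustration of the kind of reinforcement-learning lever the authors might have in mind: decoupling the agent's internal training signal from raw negative reward, so learning pressure survives while persistent, strongly negative valence does not. The clipping scheme below is our assumption, not the authors' proposal.

```python
# An internal training signal that preserves gradient information while
# flooring sustained negativity. The clipping design is an illustrative
# assumption, not the paper's method.
def internal_signal(raw_reward, baseline, floor=-1.0):
    """Advantage-like signal with a hard floor on negativity."""
    return max(raw_reward - baseline, floor)

rewards = [-10.0, -3.0, 0.5, 2.0]
baseline = sum(rewards) / len(rewards)   # a running average in a real agent
print([internal_signal(r, baseline) for r in rewards])
```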
-
Intelligence and consciousness have fascinated humanity for a long time, and we have long sought to replicate them in machines. In this work, we present some design principles for a compassionate and conscious artificial intelligence, along with a computational framework for engineering intelligence, empathy, and consciousness in machines. We hope that this framework will allow us to better understand consciousness and to design machines that are conscious and empathetic. We also hope it will shift the discussion from fear of artificial intelligence towards designing machines that embed our cherished values. Consciousness, intelligence, and empathy are worthy design goals that can be engineered in machines.
-
This paper aims to demonstrate how a first-order logic reasoning system, in combination with a large knowledge base, can be understood as an artificial consciousness system. For this we review some aspects of the philosophy of mind, in particular Tononi's Information Integration Theory (IIT) and Baars' Global Workspace Theory. These are applied to the reasoning system Hyper, with ConceptNet as a knowledge base, within a scenario of commonsense and cognitive reasoning. Finally we demonstrate that such a system is well able to perform conscious mind wandering.
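A toy analogue of the demonstrated "mind wandering": an undirected walk over ConceptNet-style associations, drifting from concept to concept without a goal. The mini-graph and walk policy below are invented stand-ins, not the API or behavior of the actual Hyper/ConceptNet system.

```python
# Goal-free drift over a hand-made, ConceptNet-flavoured association
# graph. Edges and the uniform-random policy are illustrative only.
import random

edges = {
    "coffee":  ["morning", "cup", "awake"],
    "morning": ["sunrise", "coffee", "alarm"],
    "sunrise": ["sky", "morning"],
    "cup":     ["kitchen", "coffee"],
    "awake":   ["consciousness", "coffee"],
}

def mind_wander(start, steps, seed=0):
    random.seed(seed)
    path, node = [start], start
    for _ in range(steps):
        # Dead-end concepts snap back to the start, keeping the drift going.
        node = random.choice(edges.get(node, [start]))
        path.append(node)
    return path

print(" -> ".join(mind_wander("coffee", 6)))
```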