Full bibliography: 558 resources
-
The accelerating advances in neuroscience, artificial intelligence, and robotics have been garnering interest and raising new philosophical, ethical, and practical questions that hinge on whether a scientific method of probing consciousness in machines can exist. This paper provides an analytic review of the tests for machine consciousness proposed in the academic literature over the past decade, and an overview of the diverse scientific communities involved in this enterprise. The tests put forward in this literature typically fall under one of two broad categories: architecture (the presence of consciousness is inferred from the correct implementation of a relevant architecture) and behaviour (the presence of consciousness is deduced by observing a specific behaviour). Each category has its strengths and weaknesses. The main advantage of architecture tests is that they could apparently test for qualia, a feature that has been receiving increasing attention in recent years. Behaviour tests are more synthetic and more practicable, but give a stronger role to ex post human interpretation of behaviour. We show how some disciplines and places have affinities towards certain types of tests, and which tests are more influential according to scientometric indicators.
-
ABSTRACT This paper discusses post-human spaces and technological afterness associated with the physiognomy of humans. Mechanical alteration of biological mechanisms is directly experienced as a seizing of organic consciousness. The rupture in consciousness splits it into two distinct parts: one belonging to the disappearing human, the other to the emerging cybernetic. The new being is not another human but (an)other human, an evolved different sameness. In the film Realive (2016) we encounter an extension of the self beyond death by re-placing it into another body. However, this enhancement diffuses all ‘natural’ responses and meaning-making vehicles, primarily the cognizance of death and mortality. In a classic Frankensteinian restoration, Marc is reanimated in 2084 through extensive methods of cryonization under the banner of the ‘Lazarus project’. The post-human ‘humachines’ dissolve the position of the teleological man and stretch DNA to digitality. Upgrade (2018) shows us the metamorphosis of Grey Trace, a luddite, through an installed biomechanical enhancer chip, Stem. The roach-like implant not only erases Grey’s quadriplegic body but, ironically, ‘desires’ to possess and manoeuvre the host’s body. Robotic consciousness in these assimilated after-humans is borrowed consciousness, activated by infusing the evanescent biological particle: life. Nanotechnology, molecular machines, nerve manipulators, cameras implanted inside the brain, self-generating nanobots and artificial mechanical limbs have emerged as elements of posthuman utopia/dystopia. Paradoxically, in both films the protagonists, after their reanimation and upgrading, try to return to their original positions of death and disability. In their quest to retrieve the lived body they lose their embodied reciprocations with animals, machines and other forms of life. The mysterious, irreducible, unknown and unknowable potentiality of life is levelled and dissipated by surplus information.
This paper attempts to discuss the reactions of the embodied body as memory after cryonization, and to understand the limits of psychological disability and the death of consciousness after the technological reconstruction of the disabled body.
-
The IEEE work-group for Symbiotic Autonomous Systems defined a Digital Twin as a digital representation or virtual model of any characteristics of a real entity (system, process or service), including human beings. The described characteristics are a subset of the overall characteristics of the real entity; which characteristics are included depends on the purpose of the digital twin. This paper introduces the concept of the Associative Cognitive Digital Twin: a real-time, goal-oriented, augmented virtual description that explicitly includes the external relationships of the considered entity for the considered purpose. The corresponding graph data model of the involved world supports artificial consciousness and allows an efficient understanding of the involved ecosystems and related higher-level cognitive activities. The cognitive architecture defined for Symbiotic Autonomous Systems is mainly based on the consciousness framework developed here. As a specific application example, an architecture for safety-critical systems is shown.
-
The topic of AI continues in this chapter, this time looking at how we may regard AI as having intelligence, consciousness, and possibly a soul. The notion of an android soul is explored through science fiction series like Caprica and Black Mirror, raising the question of whether one is born with a soul or whether a soul develops over time. To explore this line of inquiry, I refer to Gurdjieff and Ouspensky’s work in the field of philosophy, as well as to how Indic religions, like Buddhism, have begun to think about AI and consciousness.
-
Human and Machine Consciousness presents a new foundation for the scientific study of consciousness. It sets out a bold interpretation of consciousness that neutralizes the philosophical problems and explains how we can make scientific predictions about the consciousness of animals, brain-damaged patients and machines.
-
The main problem in robotics is strengthening a robot's artificial intelligence (AI) system; its solution will facilitate the cooperation of humans with robots. The authors suggest an advanced technology for AI development that borrows the method of universal (deep) tutoring (TU), relying on the semantic axiomatic method (AM). Under TU, knowledge understanding is achieved through the formation of rational consciousness, using the utmost mathematical abstractions expressed in the language of categories (LC). Being functional, LC is suited to describing intellectual processes (PIR) thanks to its universal constructions. Following TU, the robot's educational space (SER) is a class of categories. The AI is made more sophisticated by including new categories, as required, in the robot's multilevel hierarchical oriented network of concepts (NC). Universal laws of robot functioning are embodied as operations of algebraic structures that are objects of NC, creating an integrated environment of applications (IEA). The robot's intercourse with humans and its interaction with its working space (SWR) activate the PIR occurring in NC. Processes of assignment execution (PER) begin only when a set of relations in the SWR and in the robot's space of notions is satisfied. The ability of PIR to climb to the highest levels of NC and descend to the lowest endows the robot with the capability to generate PER, making decisions in unfamiliar SWR.
-
Estrada, D. (2018). Conscious enactive computation. arXiv. https://doi.org/10.48550/ARXIV.1812.02578
This paper looks at recent debates in the enactivist literature on computation and consciousness in order to assess major obstacles to building artificial conscious agents. We consider a proposal from Villalobos and Dewhurst (2018) for enactive computation on the basis of organizational closure. We attempt to improve the argument by reflecting on the closed paths through state space taken by finite state automata. This motivates a defense against Clark's recent criticisms of "extended consciousness", and perhaps a new perspective on living with machines.
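As a toy illustration of the closed paths through state space mentioned above (my own sketch, not code from the paper), a small finite state automaton can trace a loop that returns to its start state under a repeating input:

```python
# Hypothetical 3-state automaton, for illustration only. Under the
# repeating input symbol 0, its transitions trace a closed path through
# state space, returning to the start state.
TRANSITIONS = {
    ("A", 0): "B",
    ("B", 0): "C",
    ("C", 0): "A",
}

def run(state, inputs):
    """Return the sequence of states visited while consuming `inputs`."""
    path = [state]
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
        path.append(state)
    return path

path = run("A", [0, 0, 0])       # visits A -> B -> C -> A
closed = path[0] == path[-1]     # the trajectory closes on itself
```

Such closed trajectories are one concrete way to picture the organizational closure that Villalobos and Dewhurst appeal to, though the paper's own argument is philosophical rather than computational.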
-
This paper describes how the science fiction television series Westworld (HBO, 2016-present) questions the very nature of consciousness through a reflexive narrative blending matters of free will and self-interest with clues on how to write a serial character, thus drawing on the rich heritage of narratively complex science fiction television series of the last decades. After detailing the “hard problem” posed by any definition of consciousness, the paper follows the series’ logic, successively questioning memory, improvisation and self-interest, while focusing on a narratological approach centered on possible worlds theory.
-
From the perspective of virtue ethics, this paper points out that, as its autonomy and sensitivity improve, Artificial Intelligence increasingly resembles an ethical subject capable of taking responsibility. It argues that tackling the ethics of Artificial Intelligence by programming codes of abstract moral principles will produce many problems: the question of AI ethics is first of all a question of social integration rather than a technical one. From the perspective of the historical and social premises of ethics, the degree to which Artificial Intelligence can share an ethics system with humans equals the degree of its integration into the narrative of human society; this is also a process of establishing a common system of social cooperation between humans and Artificial Intelligence. Furthermore, self-consciousness and responsibility are social conceptions established through recognition, and Artificial Intelligence's identification with its individual social role is likewise established in the process of integration.
-
The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.
-
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
-
Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired, definition of free-will is offered and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a “Kantian whole”). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall “master function” of the agent and (f) for “deep free-will”, at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms, not just humans, but a wide range of organisms. The main impediment to free-will in present-day artificial robots, is their lack of being a Kantian whole. Consciousness does not seem to be a requirement and the minimum complexity for a free-will system may be quite low and include relatively simple life-forms that are at least able to learn.
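The requirements (a)-(f) above can be caricatured in code. The sketch below is purely illustrative and not from the paper; the agent, its internal model, its option set, and its goal function are all hypothetical stand-ins for the criteria: a self with state, more than one enactable option, an internal predictive model, and two nested levels of choice (goal selection, then action selection):

```python
class Agent:
    """Illustrative stand-in for criteria (a)-(f); all details hypothetical."""

    def __init__(self, state, goals):
        self.state = state    # (a) a self whose state is self-determined
        self.goals = goals    # candidate goals under a master function

    def predict(self, state, action):
        # (d) internal model calculating future states contingent on responses
        return state + action

    def options(self):
        # (b) more than one option could be enacted
        return [-1, 0, 1]

    def choose_goal(self):
        # (f) outer level of choice: pursue the currently least-satisfied goal
        return max(self.goals, key=lambda g: abs(g - self.state))

    def act(self):
        goal = self.choose_goal()
        # (c, e) internal, non-random evaluation of options against the goal
        best = min(self.options(),
                   key=lambda a: abs(self.predict(self.state, a) - goal))
        self.state = self.predict(self.state, best)  # enact the choice
        return best

agent = Agent(state=0, goals=[3, -2])
actions = [agent.act() for _ in range(5)]
```

On the paper's own account this sketch would still lack free will, since a program like this is not a Kantian whole: its organization is imposed from outside rather than self-produced.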
-
This chapter aims to evaluate Integrated Information Theory's claims concerning Artificial Consciousness. Integrated Information Theory (IIT) works from premises that claim that certain properties, such as unity, are essential to consciousness, to conclusions regarding the constraints upon physical systems that could realize consciousness. Among these conclusions is the claim that feed-forward systems, and systems that are not largely reentrant, necessarily will fail to generate consciousness (but may simulate it). This chapter will discuss the premises of IIT, which themselves are highly controversial, and will also address IIT's related rejection of functionalism. This analysis will argue that IIT has failed to establish good grounds for these positions, and that convincing alternatives remain available. This, in turn, implies that the constraints upon Artificial Consciousness are more generous than IIT would have them be.
-
Artificial intelligence and research on consciousness have reciprocally influenced each other: theories about consciousness have inspired work on artificial intelligence, and the results from artificial intelligence have changed our interpretation of the mind. Artificial intelligence can be used to test theories of consciousness and research has been carried out on the development of machines with the behavior, cognitive characteristics and architecture associated with consciousness. Some people have argued against the possibility of machine consciousness, but none of these objections is conclusive and many theories openly embrace the possibility that phenomenal consciousness could be realized in artificial systems.
-
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
-
One model for creating artificial consciousness is replicating every fine detail of the brain on computers and setting the model in motion. Consciousness has been experimentally demonstrated to be a much more fragmented experience than we think it to be; perhaps we only need snippets of ourselves to feel conscious. Perhaps consciousness is nothing less and nothing more than story, and all we need do to continue to feel conscious is maintain identity through computer-based narrative. Applied nanotechnology has generated countless applications in electronics, pharmacology, and materials engineering. The best approach to life extension and consciousness expansion might lie in our own marvelously complex and entire bodies, meshed with and augmented by tiny bio-nano machines that become part of us, rather than the opposite vision of humans migrating into a machine substrate.
-
I propose a physicalist theory of consciousness that is an extension of the theory of noémona species. The proposed theory covers the full consciousness spectrum from animal to machine and its human consciousness base is compatible with the corresponding work of Wundt, James, and Freud. The paper is organized in three sections. In the first, I briefly justify the methodology used. In Sec. 2, I state the inadequacies of the major work on the nature of consciousness and present a definitional system that adequately describes its changing nature and scope. Finally in Sec. 3, I state some of the consequences of the theory and introduce some of its future extensions.
-
Data assimilation is naturally conceived as the synchronization of two systems, “truth” and “model”, coupled through a limited exchange of information (observed data) in one direction. Though investigated most thoroughly in meteorology, the task of data assimilation arises in any situation where a predictive computational model is updated in run time by new observations of the target system, including the case where that model is a perceiving biological mind. In accordance with a view of a semi-autonomous mind evolving in synchrony with the material world, but not slaved to it, the goal is to prescribe a coupling between truth and model for maximal synchronization. It is shown that optimization leads to the usual algorithms for assimilation via Kalman Filtering under a weak linearity assumption. For nonlinear systems with model error and sampling error, the synchronization view gives a recipe for calculating covariance inflation factors that are usually introduced on an ad hoc basis. Consciousness can be framed as self-perception, and represented as a collection of models that assimilate data from one another and collectively synchronize. The combination of internal and external synchronization is examined in an array of models of spiking neurons, coupled to each other and to a stimulus, so as to segment a visual field. The inter-neuron coupling appears to enhance the overall synchronization of the model with reality.
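The synchronization view of data assimilation can be illustrated with a minimal sketch (my own, not the paper's Kalman-filter derivation): two copies of a chaotic system, "truth" and "model", one-way coupled by nudging the model toward observations of the truth. The coupling gain plays the role of the assimilation gain; its value here is a hypothetical choice:

```python
# "Truth" and "model" are identical chaotic systems: the logistic map, r = 3.9.
def step(x):
    return 3.9 * x * (1.0 - x)

k = 0.8                      # coupling (assimilation) gain -- hypothetical value
truth, model = 0.4, 0.9      # very different initial conditions
for _ in range(100):
    obs = truth                               # noise-free observation of the truth
    truth = step(truth)                       # truth evolves freely
    model = step(model + k * (obs - model))   # model is nudged toward obs, then evolves

error = abs(truth - model)   # shrinks toward zero: the two systems synchronize
```

Despite the chaos, the limited one-way exchange of information is enough to pull the model onto the truth's trajectory; with noisy or partial observations and model error, choosing the gain optimally is exactly the problem the paper relates to Kalman filtering.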
-
The proponents of machine consciousness predicate the mental life of a machine, if any, exclusively on its formal, organizational structure, rather than on its physical composition. Given that matter is organized on a range of levels in time and space, this generic stance must be further constrained by a principled choice of levels on which the posited structure is supposed to reside. Indeed, not only must the formal structure fit well the physical system that realizes it, but it must do so in a manner that is determined by the system itself, simply because the mental life of a machine cannot be up to an external observer. To illustrate just how tall this order is, we carefully analyze the scenario in which a digital computer simulates a network of neurons. We show that the formal correspondence between the two systems thereby established is at best partial, and, furthermore, that it is fundamentally incapable of realizing both some of the essential properties of actual neuronal systems and some of the fundamental properties of experience. Our analysis suggests that, if machine consciousness is at all possible, conscious experience can only be instantiated in a class of machines that are entirely different from digital computers, namely, time-continuous, open analog dynamical systems.