-
Understanding the nature of consciousness is one of the grand outstanding scientific challenges. The fundamental methodological problem is how phenomenal first-person experience can be accounted for in a third-person verifiable form, while the conceptual challenge is to define both its function and its physical realization. The distributed adaptive control theory of consciousness (DACtoc) proposes answers to these three challenges. The methodological challenge is answered relative to the hard problem: DACtoc proposes that it can be addressed through a convergent synthetic methodology based on the analysis of synthetic, biologically grounded agents, or quale parsing. DACtoc hypothesizes that consciousness, in both its primary and secondary forms, serves the ability to deal with the hidden states of the world, and that it emerged during the Cambrian period, allowing stable multi-agent environments to arise. The process of consciousness is an autonomous virtualization memory, which serializes and unifies the parallel and subconscious simulations of the hidden states of the world, states largely due to other agents and the self, with the objective of extracting norms. These norms are in turn projected as value onto the parallel simulation and control systems that drive action. This functional hypothesis is mapped onto the brainstem, midbrain and the thalamo-cortical and cortico-cortical systems, and analysed with respect to our understanding of deficits of consciousness. Subsequently, some of the implications and predictions of DACtoc are outlined, in particular the prediction that normative bootstrapping of conscious agents is predicated on an intentionality prior. In the view advanced here, human consciousness constitutes the ultimate evolutionary transition by allowing agents to become autonomous with respect to their evolutionary priors, leading to a post-biological Anthropocene. This article is part of the themed issue ‘The major synthetic evolutionary transitions’.
-
Reviewing recent, closely related developments at the crossroads of biomedical engineering, artificial intelligence and biomimetic technology, in this paper we attempt to classify phenomenological consciousness into three categories based on embodiment: one embodied by biological agents, another by artificial agents and a third resulting from collective phenomena in complex dynamical systems. Though this distinction is not by itself new, such a classification is useful for understanding the differences in design principles and technology necessary to engineer conscious machines. It also allows one to zero in on minimal features of phenomenological consciousness in one domain and map them onto their counterparts in another. For instance, awareness and metabolic arousal are used as clinical measures to assess levels of consciousness in patients in coma or in a vegetative state. We discuss analogous abstractions of these measures relevant to artificial systems and their manifestations. This is particularly relevant in the light of recent developments in deep learning and artificial life.
-
In this perspective article, we show that a morphospace, based on information-theoretic measures, can be a useful construct for comparing biological agents with artificial intelligence (AI) systems. The axes of this space label three kinds of complexity: (i) autonomic, (ii) computational and (iii) social complexity. On this space, we map biological agents such as bacteria, bees, C. elegans, primates and humans, as well as AI technologies such as deep neural networks, multi-agent bots, social robots, Siri and Watson. A complexity-based conceptualization provides a useful framework for identifying defining features and classes of conscious and intelligent systems. Starting with cognitive and clinical metrics of consciousness that assess awareness and wakefulness, we ask how AI and synthetically engineered life-forms would measure on homologous metrics. We argue that awareness and wakefulness stem from computational and autonomic complexity. Furthermore, tapping insights from cognitive robotics, we examine the functional role of consciousness in the context of evolutionary games. This points to a third kind of complexity for describing consciousness, namely social complexity. Based on these metrics, our morphospace suggests the possibility of additional types of consciousness beyond the biological, namely synthetic, group-based and simulated. This space provides a common conceptual framework for comparing traits and highlighting design principles of minds and machines.
-
Humans are active agents in the design of artificial intelligence (AI), and our input into its development is critical. A case is made for recognizing the importance of including non-ordinary functional capacities of human consciousness in the development of synthetic life, in order for the latter to capture a wider range in the spectrum of neurobiological capabilities. These capacities can be revealed by studying self-cultivation practices designed by humans since prehistoric times for developing non-ordinary functionalities of consciousness. A neurophenomenological praxis is proposed as a model for self-cultivation by an agent in an entropic world. It is proposed that this approach will promote a more complete self-understanding in humans and enable a more thoroughly mutually beneficial relationship between life in vivo and life in silico.
-
Is artificial consciousness theoretically possible? Is it plausible? If so, is it technically feasible? To make progress on these questions, it is necessary to lay some groundwork clarifying the logical and empirical conditions for artificial consciousness to arise and the meaning of the relevant terms involved. Consciousness is a polysemic word: researchers from different fields, including neuroscience, artificial intelligence, robotics and philosophy, among others, sometimes use different terms to refer to the same phenomena, or the same terms to refer to different phenomena. In fact, if we want to pursue artificial consciousness, a proper definition of the key concepts is required. Here, after some logical and conceptual preliminaries, we argue for the necessity of using dimensions and profiles of consciousness for a balanced discussion of their possible instantiation or realisation in artificial systems. Our primary goal in this paper is to review the main theoretical questions that arise in the domain of artificial consciousness. On the basis of this review, we propose to assess the issue of artificial consciousness within a multidimensional account. The theoretical possibility of artificial consciousness is already presumed within some theoretical frameworks; however, empirical possibility cannot simply be deduced from these frameworks but needs independent empirical validation. Analysing the complexity of consciousness, we identify its constituents and related components/dimensions and, within this analytic approach, reflect pragmatically on the general challenges that the creation of artificial consciousness confronts. Our aim is not to demonstrate conclusively either the theoretical plausibility or the empirical feasibility of artificial consciousness, but to outline a research strategy in which we propose that "awareness" may be a potentially realistic target for realisation in artificial systems.
-
While consciousness has historically been a heavily debated topic, awareness has had less success in raising the interest of scholars. However, more and more researchers are becoming interested in answering questions concerning what awareness is and how it can be artificially generated. The landscape is rapidly evolving, with multiple voices and interpretations of the concept being conceived and techniques being developed. The goal of this paper is to summarize and discuss those voices connected with projects funded by the EIC Pathfinder Challenge “Awareness Inside” call within Horizon Europe, designed specifically to foster research on natural and synthetic awareness. In this perspective, we dedicate special attention to the challenges and promises of applying synthetic awareness in robotics, as the development of mature techniques in this new field is expected to have a special impact on generating more capable and trustworthy embodied systems.