Full bibliography: 558 resources

  • Using the events of the HBO series Westworld (2016–2022) as a springboard, this paper attempts to elicit a number of philosophical arguments, dilemmas, and questions concerning technology and artificial intelligence (AI). The paper is intended to encourage readers to learn more about intriguing technophilosophical debates. The first section discusses the dispute between memory and consciousness in the context of an artificially intelligent robot. The second section delves into the issues of reality and morality for humans and AI. The final segment speculates on the potential of social interaction between sentient AI and humans. The narrative of the show serves as a glue binding together the various ideas covered throughout the paper, which in turn makes the philosophical discussions more intriguing.

  • As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness. I’ll instead make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like. I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate our selves.

  • Objective: The ultimate goal of this paper is to describe a realistic future in which humanity and life can survive immortally by creating humanoid robots that carry the consciousness of a human master, serve that master as companions, and learn everything about the master's consciousness. Contributions: We present a groundbreaking methodology for the immortality of humanoid robots with a human consciousness. In this paper, we emphasize that current humanoid robotics technologies have reached the sophistication needed to design and fabricate intelligent AI computers that allow humanoids to survive immortally. Once a human life is close to being over (through age or sickness), the humanoid takes over and can stay alive as long as it has the necessary energy to live on. These humanoids could even travel through space and to other planets, opening up a whole new frontier for exploration and life. They could benefit from quantum entanglement to move through space to any destination.

  • The ideas of this book originate from the mobile WAVE approach which allowed us, more than a half century ago, to implement citywide heterogeneous computer networks and solve distributed problems on them well before the internet. The invented paradigm evolved into Spatial Grasp Technology and resulted in a European patent and eight books. The volumes covered concrete applications in graph and network theory, defense and social systems, crisis management, simulation of global viruses, gestalt theory, collective robotics, space research, and related concepts. The obtained solutions often exhibited high system qualities like global integrity, distributed awareness, and even consciousness. This current book takes these important characteristics as primary research objectives, together with the theory of patterns covering them all. This book is oriented towards system scientists, application programmers, industry managers, defense and security commanders, and university students (especially those interested in advanced MSc and PhD projects on distributed system management), as well as philosophers, psychologists, and United Nations personnel.

  • The article reflects various approaches from philosophy and programming to the technical problem of creating and implementing artificial consciousness (AC) in software. Various purposes for creating AC and basic approaches to determining its nature are described. To solve the problem of creating an AC, an architecture is proposed that includes ten levels, starting from the basic level of collecting and systematizing information about the external world and ending with the upper level of influence on it, agreed with the person, and the level of decision-making. The delimitation of functions and the procedure for interaction between a person and an AC are considered in detail. In conclusion, the properties that, from a programmer's point of view, most importantly characterize artificial consciousness are given.

  • The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models. It is proposed that for a system to be conscious, there must be a straightforward relationship between the material entities that compose the system and the realizers of functional roles, that the realizers of the functional roles must play their roles due to internal causal powers, and that they must continue to exist over time.

  • On broadly Copernican grounds, we are entitled to default assume that apparently behaviorally sophisticated extraterrestrial entities ("aliens") would be conscious. Otherwise, we humans would be inexplicably, implausibly lucky to have consciousness, while similarly behaviorally sophisticated entities elsewhere would be mere shells, devoid of consciousness. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness in humans ("consciousness mimics"), and in particular a broad class of current, near-future, and hypothetical robots. These considerations, which we formulate, respectively, as the Copernican and Mimicry Arguments, jointly defeat an otherwise potentially attractive parity principle, according to which we should apply the same types of behavioral or cognitive tests to aliens and robots, attributing or denying consciousness similarly to the extent they perform similarly. Instead of grounding speculations about alien and robot consciousness in metaphysical or scientific theories about the physical or functional bases of consciousness, our approach appeals directly to the epistemic principles of Copernican mediocrity and inference to the best explanation. This permits us to justify certain default assumptions about consciousness while remaining to a substantial extent neutral about specific metaphysical and scientific theories.

  • In this paper, I’ll examine whether we could be justified in attributing consciousness to artificial intelligent systems. First, I’ll give a brief history of the concept of artificial intelligence (AI) and get clear on the terms I’ll be using. Second, I’ll briefly review the kinds of AI programs on offer today, identifying which research program I think provides the best candidate for machine consciousness. Lastly, I’ll consider the three most plausible ways of knowing whether a machine is conscious: (1) an AI demonstrates a sufficient level of organizational similarity to that of a human thinker, (2) an inference to the best explanation, and (3) what I call “punting to panpsychism”, i.e., the idea that if everything is conscious, then we get machine consciousness in AI for free. However, I argue that all three of these methods for attributing machine consciousness are inadequate since they each face serious philosophical problems which I will survey and specifically tailor to each method.

  • How is language related to consciousness? Language functions to categorise perceptual experiences (e.g., labelling interoceptive states as 'happy') and higher-level constructs (e.g., using 'I' to represent the narrative self). Psychedelic use and meditation might be described as altered states that impair or intentionally modify the capacity for linguistic categorisation. For example, psychedelic phenomenology is often characterised by 'oceanic boundlessness' or 'unity' and 'ego dissolution', which might be expected of a system unburdened by entrenched language categories. If language breakdown plays a role in producing such altered behaviour, multimodal artificial intelligence might align more with these phenomenological descriptions when attention is shifted away from language. We tested this hypothesis by comparing the semantic embedding spaces from simulated altered states after manipulating attentional weights in CLIP and FLAVA models to embedding spaces from altered states questionnaires before manipulation. Compared to random text and various other altered states including anxiety, models were more aligned with disembodied, ego-less, spiritual, and unitive states, as well as minimal phenomenal experiences, with decreased attention to language and vision. Reduced attention to language was associated with distinct linguistic patterns and blurred embeddings within and, especially, across semantic categories (e.g., 'giraffes' become more like 'bananas'). These results lend support to the role of language categorisation in the phenomenology of altered states of consciousness, like those experienced with high doses of psychedelics or concentration meditation, states that often lead to improved mental health and wellbeing.

  • This paper presents a breakthrough approach to artificial general intelligence (AGI). The criteria of AGI named in the literature go beyond the boundaries of actual intelligence and point to the necessity of modeling consciousness. Consciousness is a functional organ that has no structural localization; its modeling is possible by modeling functions immanent to consciousness. One of the basic functions is sensation - the image of an external influence or the internal state of an organism coming into consciousness. We turn to the concept of sensation presented in the Anthropology of Hegel's Philosophy of Spirit, according to which any content, including spiritual, ethical, logical, and other content comes into consciousness through its embodiment in the form of sensation. The results of neurobiological and psychophysiological experiments (electroencephalograms, MRI), which record the connection of sensations and cognitive acts with mental states and changes in the neural environment of the brain, point to the realism of Hegel's philosophical concept and the legitimacy of its application to the solution of scientific and technical problems. The paper argues for the realism of the Hegelian philosophical concept of sensation and discusses the possibility of modeling the activity of consciousness by operating with complexes of sensations in terms of attention, content manipulation, and volitional acts. The principle of linking (embodiment) of sense (mental) and signifying (sensed) content is expressed by the thesis - “consciousness is a kind of sensation”. Prospective developments of AGI obtain original conceptual semantics for solving hard-to-formalize problems on modeling intelligence and consciousness. #CSOC1120.

  • This brief technical synopsis points to the key role of AI tools in enhancing human spiritual development. The analysis foresees a deepening integration of learning Torah and science via AI tools, thus extending human spiritual consciousness through memory, speed, and cognition; i.e., a new stage of Judaism is predicted with respect to our tech-know-logical information age.

  • The tech-know-logical role of AI/AC is extended by the concept of artificial cognition (ACO), with respect to a science of learning. AI tools are understood to empower the human mind to learn the cosmic and structural principles (laws) of our autodidactic universe, in order to live a more human species-appropriate and nature-sensitive life in advanced harmony. Meta-technology, in ethical and rational terms, is required for this evolutionary step towards human creativeness.

  • The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.

  • People want AI systems that do what they say and are reliable, trustworthy, and explainable. We propose a DIKWP (Data, Information, Knowledge, Wisdom, and Purpose) artificial consciousness white-box evaluation standard and method for AI systems. We categorize AI system output resources into deterministic and uncertain resources, the latter including incomplete, inconsistent, and imprecise data. We then map these resources to the DIKWP framework for testing. For deterministic resources, we evaluate their transformation into different resource types based on purpose. For uncertain resources, we evaluate their potential conversion into other deterministic resources through purpose-driven transformation. We construct an AI diagnostic scenario using a 2S-dimensional (SxS) framework to evaluate both deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white-box evaluation standard and method effectively assesses the cognitive capabilities of AI systems and demonstrates a certain level of interpretability, thus contributing to AI system improvement and evaluation.

  • The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of possibility by arguing that machines can share the human form of life and thus acquire human mindedness, which is to say they can be intelligent, conscious, sentient, etc. in precisely the way that a human being typically is.

  • Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. One of the challenges is to understand and model the role of consciousness and, in particular, its subjective perspective as an internal level of representation (including for social cognition) in the governance of behaviour. Toward this aim, we implemented the principles of the Projective Consciousness Model (PCM) into artificial agents embodied as virtual humans, extending a previous implementation of the model. Our goal was to offer a proof of concept, based purely on simulations, as a basis for a future methodological framework. Its overarching aim is to make it possible to assess hidden psychological parameters in human participants, based on a model relevant to consciousness research, in the context of experiments in virtual reality. As an illustration of the approach, we focused on simulating the role of Theory of Mind (ToM) in the choice of strategic approach and avoidance behaviours to optimise the satisfaction of agents' preferences. We designed a main experiment in a virtual environment that could be used with real humans, allowing us to classify behaviours as a function of order of ToM, up to the second order. We show that agents using the PCM demonstrated expected behaviours with consistent parameters of ToM in this experiment. We also show that the agents could be used to correctly estimate each other's order of ToM. Furthermore, in a supplementary experiment, we demonstrated how the agents could simultaneously estimate the order of ToM and the preferences attributed to others to optimise behavioural outcomes. Future studies will empirically assess and fine-tune the framework with real humans in virtual reality experiments.

  • Conscious sentient AI seems to be all but a certainty in our future, whether in fifty years’ time or only five years. When that time comes, we will be faced with entities with the potential to experience more pain and suffering than any other living entity on Earth. In this paper, we look at this potential for suffering and the reasons why we would need to create a framework for protecting artificial entities. We look to current animal welfare laws and regulations to investigate why certain animals are given legal protections, and how this can be applied to AI. We use a meta-theory of consciousness to determine what developments in AI technology are needed to bring AI to the level of animal sentience where legal arguments for their protection can be made. We finally speculate on what a future conscious AI could look like based on current technology.

  • Various interpretations of the literature detailing the neural basis of learning have in part led to disagreements concerning how consciousness arises. Further, the design of artificial learning models has struggled to replicate intelligence as it occurs in the human brain. Here, we present a novel learning model, which we term the "Recommendation Architecture (RA) Model", based on prior theoretical work by Coward, using a dual-learning approach featuring both consequence feedback and non-consequence feedback. The RA model is tested on a categorical learning task in which no two inputs are the same throughout training and/or testing. We compare this to three consequence-feedback-only models based on backpropagation and reinforcement learning. Results indicate that the RA model learns novelty more efficiently and can accurately return to prior learning after new learning with less computational resource expenditure. The final results of the study show that consequence feedback as interpretation, not creation, of cortical activity produces a learning style more similar to human learning in terms of resource efficiency. Stable information meanings underlie conscious experiences. The work provided here attempts to link the neural basis of nonconscious and conscious learning while providing early results for a learning protocol more similar to the human brain's than is currently available.

  • A new synergetic approach to consciousness modeling is proposed, which takes into account recent advancements in neuroscience, information technologies, and philosophy.

  • The fields of artificial intelligence (AI) and artificial consciousness (AC) have largely developed separately, with different goals and criteria for success and with only a minimal exchange of ideas. In this chapter, we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be a central concern in such matters. We describe our recent efforts to explore this hypothesis computationally and to identify associated computational correlates of consciousness. We then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence.

Last update from database: 3/23/25, 8:36 AM (UTC)