  • In popular media, a connection is often drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human- or superhuman-level intelligence. In this work, we explore the validity and potential application of this seemingly intuitive link between consciousness and intelligence. We do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST). We find that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Having identified this trend, we use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a single unified and implementable model. Because mental time travel is made possible by cognitive abilities underlying each of the three functional theories, artificial agents capable of it would not only possess greater general intelligence than current approaches but would also be more consistent with our current understanding of the functional role of consciousness in humans, making such agents a promising near-term goal for AI research.
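
The unified model is described only conceptually in this abstract. As a toy illustration (not the authors' model), the sketch below treats mental time travel as forward rollouts through a learned generative world model, with a simple random-shooting planner choosing among imagined futures; the names WorldModel and imagine_futures, the linear dynamics, and the goal-distance reward are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

class WorldModel:
    """Assumed toy forward model: linear dynamics plus a goal-distance reward."""
    def __init__(self, state_dim, action_dim):
        self.A = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.B = rng.normal(scale=0.1, size=(state_dim, action_dim))
        self.goal = np.ones(state_dim)

    def step(self, state, action):
        next_state = state + self.A @ state + self.B @ action
        reward = -float(np.linalg.norm(next_state - self.goal))
        return next_state, reward

def imagine_futures(model, state, horizon=5, n_plans=64, action_dim=2):
    """'Mental time travel' as imagination: roll out candidate action sequences
    in the model and keep the plan with the best imagined return."""
    best_plan, best_return = None, -np.inf
    for _ in range(n_plans):
        plan = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state.copy(), 0.0
        for action in plan:
            s, reward = model.step(s, action)
            total += reward
        if total > best_return:
            best_plan, best_return = plan, total
    return best_plan, best_return

model = WorldModel(state_dim=4, action_dim=2)
plan, value = imagine_futures(model, state=np.zeros(4))
print("imagined return of the chosen 5-step plan:", round(value, 3))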

  • In this paper, we propose the hypothesis that consciousness has evolved to serve as a platform for general intelligence. This idea stems from considerations of the potential biological functions of consciousness. Here we define general intelligence as the ability to apply knowledge and models acquired from past experiences to generate solutions to novel problems. Based on this definition, we propose three possible ways to establish general intelligence under existing methodologies for constructing AI systems: solution by simulation, solution by combination, and solution by generation. We then relate those solutions to putative functions of consciousness put forward, respectively, by the information generation theory, the global workspace theory, and a form of higher-order theory in which qualia are regarded as meta-representations. Based on these insights, we propose that consciousness integrates a group of specialized generative/forward models into a complex in which combinations of those models are flexibly formed, and that qualia are meta-representations of first-order mappings that endow an agent with the ability to choose which maps to use when solving novel problems. These functions can be implemented as an "artificial consciousness". Such a system could generate policies from a small amount of trial and error when solving novel problems. Finally, we propose possible directions for future research into artificial consciousness and artificial general intelligence.
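
A minimal sketch of the "flexible combination of specialized forward models" idea, under assumed names and toy dynamics (not the authors' system): a small library of specialist forward models is re-weighted by how well each one explains a handful of observed transitions, and the resulting mixture can then be used for prediction and planning on the novel task.

import numpy as np

rng = np.random.default_rng(1)

def make_specialist(drift):
    """Each specialist forward model assumes a different environment dynamic."""
    return lambda state, action: state + drift + 0.5 * action

specialists = [make_specialist(np.array(d)) for d in ([0.2, 0.0], [0.0, 0.2], [-0.1, -0.1])]

def fit_mixture(transitions):
    """Weight the specialists by how well each predicts a few real transitions."""
    errors = np.zeros(len(specialists))
    for state, action, next_state in transitions:
        for i, predict in enumerate(specialists):
            errors[i] += np.sum((predict(state, action) - next_state) ** 2)
    weights = np.exp(-errors)          # softmax-like re-weighting by prediction error
    return weights / weights.sum()

def mixed_prediction(weights, state, action):
    """The flexibly formed 'complex': a weighted combination of the specialists."""
    return sum(w * predict(state, action) for w, predict in zip(weights, specialists))

# A small amount of trial and error in the (unknown) true environment.
true_drift = np.array([0.0, 0.2])
transitions = []
for _ in range(3):
    state, action = rng.normal(size=2), rng.normal(size=2)
    transitions.append((state, action, state + true_drift + 0.5 * action))

weights = fit_mixture(transitions)
print("specialist weights after three trials:", np.round(weights, 3))

The exponential re-weighting here is only one convenient choice; any scheme that favors the specialists with the lowest prediction error on the new task would illustrate the same point.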

  • The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and learning how to use existing models and knowledge to solve new tasks efficiently. This capability is important for making existing AI systems more adaptive and flexible, and since it is one of the areas where a gap remains between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. From a broader perspective, we discuss the similarities and differences between social cognition and meta-learning, and we conclude with speculations on the potential links between the intelligence endowed by model-based RL and consciousness. For future work, we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
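
The following toy sketch illustrates meta-learning only in the narrow "adjust the hyperparameters of an existing learning algorithm" sense mentioned in the abstract; the grid-search outer loop and the scalar regression tasks are deliberate simplifications of the deep meta-RL methods the review actually covers.

import numpy as np

rng = np.random.default_rng(2)

def inner_loop(target, lr, steps=5):
    """Base learner: a few gradient-descent steps on the loss (w - target)^2."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2.0 * (w - target)   # gradient of (w - target)^2 w.r.t. w
    return (w - target) ** 2           # loss after adaptation

def meta_train(candidate_lrs, n_tasks=200):
    """Outer loop ('learning to learn'): pick the hyperparameter that makes the
    inner learner adapt best on average across a distribution of tasks."""
    targets = rng.normal(size=n_tasks)
    mean_losses = [np.mean([inner_loop(t, lr) for t in targets]) for lr in candidate_lrs]
    return candidate_lrs[int(np.argmin(mean_losses))]

best_lr = meta_train(candidate_lrs=np.linspace(0.05, 0.6, 12))
print("meta-learned learning rate:", round(float(best_lr), 3))
# A novel task is now solved efficiently with only the five inner-loop steps.
print("post-adaptation loss on a novel task:", round(float(inner_loop(1.7, best_lr)), 6))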

  • Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
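
The report's indicator-property methodology can be pictured with a small scorecard structure; the Indicator dataclass, the example indicators, and the verdicts below are illustrative assumptions for a hypothetical system, not the authors' actual list or assessments.

from dataclasses import dataclass

@dataclass
class Indicator:
    theory: str        # the theory the property is derived from
    description: str   # the property, stated in computational terms
    satisfied: bool    # assessor's judgement for the system under review

def assess(system_name, indicators):
    """Print a simple scorecard of which indicator properties a system satisfies."""
    met = sum(1 for ind in indicators if ind.satisfied)
    print(f"{system_name}: {met}/{len(indicators)} indicator properties satisfied")
    for ind in indicators:
        mark = "x" if ind.satisfied else " "
        print(f"  [{mark}] ({ind.theory}) {ind.description}")

# Hypothetical indicators and verdicts, for illustration only.
assess("ExampleLanguageAgent", [
    Indicator("Recurrent processing theory", "algorithmic recurrence", True),
    Indicator("Global workspace theory", "limited-capacity workspace with global broadcast", False),
    Indicator("Attention schema theory", "predictive model of the system's own attention", False),
])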
