  • As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness. I instead make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like. I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate ourselves.

  • Machine (artificial) consciousness can be interpreted in both strong and weak forms, as an instantiation or as a simulation. Here, I argue in favor of weak artificial consciousness, proposing that synthetic models of neural mechanisms potentially underlying consciousness can shed new light on how these mechanisms give rise to the phenomena they do. The approach I advocate involves using synthetic models to develop "explanatory correlates" that can causally account for deep, structural properties of conscious experience. In contrast, the project of strong artificial consciousness — while not impossible in principle — has yet to be credibly illustrated, and is in any case less likely to deliver advances in our understanding of the biological basis of consciousness. This is because of the inherent circularity involved in using models both as instantiations and as cognitive prostheses for exposing general principles, and because treating models as instantiations can indefinitely postpone comparisons with empirical data.

  • Which systems/organisms are conscious? New tests for consciousness (‘C-tests’) are urgently needed. There is persisting uncertainty about when consciousness arises in human development, when it is lost due to neurological disorders and brain injury, and how it is distributed in nonhuman species. This need is amplified by recent and rapid developments in artificial intelligence (AI), neural organoids, and xenobot technology. Although a number of C-tests have been proposed in recent years, most are of limited use, and currently we have no C-tests for many of the populations for which they are most critical. Here, we identify challenges facing any attempt to develop C-tests, propose a multidimensional classification of such tests, and identify strategies that might be used to validate them.
