
Full bibliography: 558 resources

  • A classic problem for artificial intelligence is to build a machine that imitates human behavior well enough to convince those who are interacting with it that it is another human being [1]. One approach to this problem focuses on building machines that imitate internal psychological facets of human interaction, such as artificially intelligent agents that play grandmaster chess [2]. Another approach focuses on building machines that imitate external psychological facets by building androids [3]. The disparity between these approaches reflects a problem with both: artificial intelligence abstracts mentality from embodiment, while android science abstracts embodiment from mentality. This problem must be solved if a sentient artificial entity that is indistinguishable from a human being is to be constructed. One solution is to examine a fundamental human ability and a context in which both the construction of internal cognitive models and an appropriate external social response are essential. This paper considers how reasoning with intent in the context of human vs. android strategic interaction may offer a psychological benchmark with which to evaluate the human-likeness of android strategic responses. Understanding how people reason with intent may offer a theoretical context in which to bridge the gap between the construction of sentient internal and external artificial agents.

  • This chapter discusses the observation that human emotions sometimes have unfortunate effects, which raises the concern that robot emotions might not always be optimal. It analyzes brain mechanisms for vision and language to ground an evolutionary account relating motivational systems to emotions and the cortical systems that elaborate them. It also attempts to determine how to characterize emotions in such a way that a robot can be considered to have emotions, even though they are not empathically linked to human emotions.

  • The author found the Journal of Consciousness Studies (JCS) issue on Machine Consciousness (2003) frustrating and alienating. It is argued that a consensus seems to be building that consciousness is accessible to scientific scrutiny, so much so that it is already understood well enough to be modeled and even synthesized. It could instead be that the vocabulary of consciousness is being subtly redefined to be amenable to scientific investigation and explicit modeling. Such semantic revisionism is confusing and often misleading. Whatever else consciousness is, it is at least a certain quality of life apparent from personal reflection. Introspection is, after all, the only way we know that consciousness even exists. Scientific and technical redefinitions that fail to account for its phenomenal quality are at best incomplete. In the author's view, all but one of the ten articles in the JCS volume on Machine Consciousness commit various degrees of Protean distortion: common-sense terms describing consciousness are consistently distorted into special uses that strip them of their meaning. In criticism of the JCS articles, the author points out explicit principles that are missing from their accounts of machine consciousness, leaving the various inferences about consciousness open to charges of being distorted and illegitimate.

  • In 1994 John Searle stated (Searle 1994: 11-12) that the Chinese Room Argument (CRA) is an attempt to prove the truth of the premise that syntax is not sufficient for semantics, which led him to the conclusion that ‘programs are not minds’ and hence that computationalism, the idea that the essence of thinking lies in computational processes and that such processes thereby underlie and explain conscious thinking, is false. The argument presented in this chapter is not a direct attack or defence of the CRA, but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA’s critique of the link between syntax and semantics, this chapter will explore the associated link between syntax and physics.

  • Since the beginnings of computer technology, researchers have speculated about the possibility of building smart machines that could compete with human intelligence. Given the current pace of advances in artificial intelligence and neural computing, such an evolution seems to be a more concrete possibility. Many people now believe that artificial consciousness is possible and that, in the future, it will emerge in complex computing machines. However, a discussion of artificial consciousness gives rise to several philosophical issues: can computers think or do they just calculate? Is consciousness a human prerogative? Does consciousness depend on the material that comprises the human brain, or can computer hardware replicate consciousness? Answering these questions is difficult because it requires combining information from many disciplines including computer science, neurophysiology, philosophy, and religion. Further, we must consider the influence of science fiction, especially science fiction films, when addressing artificial consciousness. As a product of the human imagination, such works express human desires and fears about future technologies and may influence the course of progress. At a societal level, science fiction simulates future scenarios that can help prepare us for crucial transitions by predicting the consequences of significant technological advances. The paper considers robots in science fiction, the Turing test, computer chess and artificial consciousness.

  • Within a technological context, this volume addresses contemporary theories of consciousness, subjective experience, the creation of meaning and emotion, and relationships between cognition and location. Its focus is both on and beyond the digital culture, seeking to assimilate new ideas emanating from the physical sciences as well as embracing spiritual and artistic aspects of human experience. Building on the studies published in Roy Ascott's successful Reframing Consciousness, the book documents the very latest work from those connected with the internationally acclaimed CAiiA-STAR centre and its conferences.

  • Three fundamental questions concerning minds are presented. These are about consciousness, intentionality and intelligence. After we present the fundamental framework that has shaped both the philosophy of mind and Artificial Intelligence research in the last forty years or so regarding the last two questions, we turn to consciousness, whose study still seems evasive to both communities. After briefly illustrating why and how phenomenal consciousness is puzzling, a theoretical diagnosis of the problem is proposed and a framework is presented within which further research would yield a solution. The diagnosis is that the puzzle stems from a peculiar dual epistemic access to phenomenal aspects (qualia) of our conscious experiences. An account of concept formation is presented such that both the phenomenal concepts (like the concepts RED and SWEET) and the introspective concepts (like the concepts EXPERIENCING RED and TASTING SWEET) are acquired from a first-person perspective as opposed to the third-person one (the standard concept-formation strategy for objective features). We explain the first-person perspective in information-theoretic and computational terms.

    "Nature (the Art whereby God hath made and governes the World) is by the Art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. For seeing life is but a motion of Limbs, the beginning whereof is in some principall part within; why may we not say, that all Automata (Engines that move themselves by springs and wheels as doth a watch) have an artificiall life? For what is the Heart, but a Spring; and the Nerves but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? Art goes yet further, imitating that Rationall and most excellent worke of Nature, Man." (Hobbes 1651, p. 81)

    So declared Thomas Hobbes in 1651 in the Introduction to his well-known work, Leviathan, published one year after René Descartes' death. Descartes was also interested in mechanical explanations of bodily processes and organic life. In fact, on the basis of his neuroanatomical and physiological studies, as well as philosophical arguments, Descartes had already argued that human and animal bodies could be mechanically understood as complicated and intricately designed machines (Descartes 1664). What differentiated Descartes from Hobbes lay in his belief that human beings, unlike non-human animals, were not merely bodies; they were unions of material bodies and immaterial souls. The immaterial soul was necessary for Descartes to explain the peculiar capacities and activities of the human mind. As such, materialist mechanical explanations could never be sufficient to account for the whole human being.

  • Perception has both unconscious and conscious aspects. In all cases, however, what we perceive is a model of reality. Through the evolved construction of the brain, we divide the world into two parts: our body and the outside world. But the perceptual process is the same in both cases. We perceive a construct, usually governed by sensed data but always involving memory, goals, fears, expectations, etc. As a first step toward Artificial Perception in man-made systems, we examine perception in general here.

  • This paper addresses the relationship of consciousness to artificial life set in the context of art. Artificial life is as much a part of our quest for self-definition as an instrument in the construction of reality. In exploring the technology of life we are exploring the possibilities of what we might become. In our hypermediated, telematic culture, the self acquires an essentially non-linear identity. Telepresence and virtual reality, the avatars of Net life, present us with a distributed, multiple identity which in turn is producing a radically new art. This embodies an ‘interstitial practice’ set within the domain of artificial life, and located at the intersections of cognitive science, bio-engineering, telematics and metaphysics. Can artists find in artificial life, nanotechnology, robotics and molecular engineering the means towards a re-materialization of art, after its postmodern, screen-based dematerialization? Just as ideas of the ‘immaterial’ have dominated art discourse for the last 15 years, so questions of emergent form, intelligent structures and artificial life are shaping a new discourse, from which art is moving off the screen and back into the material world. Will the real significance of art's re-materialization be at the level of mind? Will artificial life only gain cultural significance when it gives rise to artificial mind and the construction of consciousness?

  • Biosystems are unitary entities that are alive to some degree as a system. They occur at scales ranging from the molecular to the biospheric, and can be of natural, artificial or combined origin. The engineering of biosystems involves one or more of the activities of design, construction, operation, maintenance, repair, and upgrading. Engineering is usually done in order to achieve certain preconceived objectives by ensuring that the resultant systems possess particular features. This article concerns the engineering of biosystems so that they will be somewhat autonomous, or able to pursue their own goals in a dynamic environment. Central themes include: the computational abilities of a system; the virtual machinery, such as algorithms, that underlies these abilities (mind); and the actual computation that is performed (mentation). A significantly autonomous biosystem must be engineered to possess particular sets of computational abilities (faculties). These must be of sufficient sophistication (intelligence) to support the maintenance and use of a self-referencing internal model (consciousness), thereby increasing the potential for autonomy. Examples refer primarily to engineered ecosystems combined with technological control networks (ecocyborgs). The discussion focuses on clear working definitions of these concepts, their integration into a coherent lexicon (which has been lacking until now), and the exposition of an accompanying philosophy relevant to the engineering of the virtual aspects of biosystems.

  • A memory-controlled sensor/actuator machine senses conditions in its environment at given moments and attempts to produce an action based upon its memory. However, a sensor/actuator machine will stop producing new behavior if its environment is removed. A sensor/sensor unit can be added to the sensor/actuator machine, forming a compound machine. The sensor/sensor unit produces a stream of internally created sensed conditions, which can replace the sensed conditions from the environment. This illusion of an environment is similar to consciousness. In addition, actuator/sensor and actuator/actuator units can be added to this compound machine to further enhance its ability to function without an environment. Predetermined and empirical memory cells can be distributed throughout the control units of this compound machine to provide instinctive and learned behavior. The internal and external behavior of this compound machine can be modified greatly by changing the cycle start and ramp signals that activate these different kinds of memory cells. These signals are similar in form to brain waves. A minimal sketch of such a compound machine appears below.
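
    The architecture this abstract describes lends itself to a compact illustration. The following Python sketch is a minimal reading of it, not the paper's design: the names MemoryCell, SensorSensorUnit, and CompoundMachine are hypothetical, sensed conditions are assumed to be discrete tokens, and the cycle start and ramp signals are omitted. It shows a sensor/actuator machine whose sensor/sensor unit learns condition-to-condition transitions from the environment and replays them as an internal stream once the environment is removed.

        # Illustrative sketch of the compound machine described above. All names
        # are hypothetical; the paper's cycle start and ramp signals are omitted.

        class MemoryCell:
            """A condition -> value lookup. Predetermined entries model
            instinctive behavior; entries written via learn() model
            empirical (learned) behavior."""

            def __init__(self, predetermined=None):
                self.table = dict(predetermined or {})

            def recall(self, condition):
                return self.table.get(condition)

            def learn(self, condition, value):
                self.table[condition] = value


        class SensorSensorUnit:
            """Produces a stream of internally created sensed conditions by
            replaying remembered condition-to-condition transitions."""

            def __init__(self, transition_memory):
                self.memory = transition_memory
                self.last = None

            def observe(self, condition):
                self.last = condition

            def next_condition(self):
                self.last = self.memory.recall(self.last) or self.last
                return self.last


        class CompoundMachine:
            """A sensor/actuator machine plus a sensor/sensor unit: it acts on
            real sensed conditions while the environment is present, and on an
            internal 'illusion' of the environment once input is removed."""

            def __init__(self, instincts):
                self.actuator_memory = MemoryCell(predetermined=instincts)
                self.transitions = MemoryCell()   # empirical memory
                self.inner = SensorSensorUnit(self.transitions)
                self.prev = None

            def step(self, sensed=None):
                if sensed is None:                        # environment removed:
                    sensed = self.inner.next_condition()  # use internal stream
                else:
                    if self.prev is not None:             # learn how conditions
                        self.transitions.learn(self.prev, sensed)  # follow one another
                    self.prev = sensed
                    self.inner.observe(sensed)
                return self.actuator_memory.recall(sensed)


        machine = CompoundMachine(instincts={"light": "approach", "dark": "rest"})
        for condition in ["light", "dark", "light"]:
            machine.step(condition)   # acting on, and learning from, the environment
        print(machine.step(None))     # environment removed -> internal stream -> 'rest'

    The point of the sketch is the substitution the abstract describes: the same empirical memory cell serves learning while the environment is present and supplies the internal stream once it is gone.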

  • Mind <> Computer: Attempts to mimic human intelligence through methods of classical computing have failed because implementing basic elements of rationality has proven resistant to the design criteria of machine intelligence. A radical definition of consciousness is proposed, describing awareness as the dynamic representation of a noumenon comprised of three base states, and not as itself fundamental, as it is generally defined in the current reductionistic view of the standard model, a view that has created an intractable hard problem of consciousness as defined by Chalmers. By clarifying the definition of matter, a broader ontological quantum theory removes immateriality from the Cartesian split, bringing mind into the physical realm for pragmatic investigation. Evidence suggests that the brain is a naturally occurring quantum computer; but since the brain is not paramount to awareness, it does not by itself evanesce consciousness without the interaction of a nonlocal conscious process, because mind <> computer and cannot be reduced to brain states alone. The proposed cosmology of consciousness is indicative of a teleological principle as an inherent part of a conscious universe. By applying the parameters of quantum brain dynamics to the stack of a specialized hybrid electronic-optical quantum computer with a heterosoric molecular crystal core, consciousness evanesces through entrainment of the nonlocal conscious processes. This 'extracellular containment of natural intelligence' probably represents the only viable direction for AI to simulate 'conscious computing', because true consciousness = life.

Last update from database: 3/23/25, 8:36 AM (UTC)