
Full bibliography (558 resources)

  • “What is mind?” “Can we build synthetic or artificial minds?” Think these questions are reserved for science fiction? Not anymore. This collection presents a diverse overview of where the development of artificial minds stands as the twenty-first century begins. Examined from nearly all viewpoints, Visions of Mind includes perspectives from philosophy, psychology, cognitive science, social studies, and artificial intelligence. The collection comes largely as a result of conferences and symposia conducted by many of the leading minds on this topic. At its core is Professor Aaron Sloman's symposium from the spring 2000 UK Society for Artificial Intelligence conference. Authors from that symposium, as well as others from around the world, have updated their perspectives and contributed to this powerful book. The result is a multi-disciplinary approach to the long-term problem of designing a human-like mind, whether for scientific, social, or engineering purposes. The topics addressed within this text are valuable to both artificial intelligence and cognitive science, and also to the academic disciplines that they draw on and feed, among them philosophy, computer science, and psychology.

  • What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such systems, guided and limited by associative memory, is similar to the stream of consciousness. A specific architecture of an artificial system, termed articon, is introduced that by its very design has to claim that it is conscious. Non-verbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the flow of inner states of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills – when conscious information processing is replaced by subconscious processing – is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon’s claims that it is conscious. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to that of humans.

  • In his article on The Liabilities of Mobility, Merker (this issue) asserts that “Consciousness presents us with a stable arena for our actions—the world …” and argues for this property as providing evolutionary pressure for the evolution of consciousness. In this commentary, I will explore the implications of Merker’s ideas for consciousness in artificial agents as well as animals, and also meet some possible objections to his evolutionary pressure claim.

  • Asking whether a machine can be conscious is rather like asking whether one has stopped beating one's wife: The question is so heavy with assumptions that either answer would be incriminating! The answer, of course, is: It depends entirely on what you mean by 'machine'! If you mean the current generation of man-made devices (toasters, ovens, cars, computers, today's robots), the answer is: almost certainly not.

  • After discussing various types of consciousness, several approaches to machine consciousness, software agents, and global workspace theory, we describe a software agent, IDA, that is “conscious” in the sense of implementing that theory of consciousness. IDA perceives, remembers, deliberates, negotiates, and selects actions, sometimes “consciously.” She uses a variety of mechanisms, each of which is briefly described. It’s tempting to think of her as a conscious artifact. Is such a view in any way justified? The remainder of the paper considers this question.

  • Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain: the flow of inner speech, inner imagery, and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and would perceive this as immaterial. Novel views on consciousness and the mind-body problem are presented. This book is a must for anyone interested in consciousness research and the latest ideas in the forthcoming technology of mind.

  • Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called “Conscious” Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending messages and interpreting messages in natural language that organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. Baars (1997) lists the psychological “facts that any complete theory of consciousness must explain” in his appendix to In the Theater of Consciousness; global workspace theory was designed to explain these “facts.” The present article discusses how the design of CMattie accounts for these facts and thereby the extent to which it implements global workspace theory.

  • The question “What is consciousness for?” is considered with particular relevance to objects created using interactive technologies. It is argued that an understanding of artificial life, with its attendant notion of robotic consciousness, is not separable from an understanding of human consciousness. The positions of Daniel Dennett and John Searle are compared. Dennett believes that by understanding the process of evolutionary design we can work towards an understanding of consciousness. Searle's view is that in most cases mental attributes such as consciousness are either dispositional or observer relative. This opposition is taken as the basis for a discussion of the purposes of consciousness in general and how these might be manifest in human and robotic forms of life.

  • The main concern of this chapter is to determine whether consciousness in robots is possible. Several reasons why conscious robots are deemed impossible are examined, namely: robots are purely material things, and consciousness requires immaterial mind-stuff; robots are inorganic (by definition), and consciousness can exist only in an organic brain; robots are artefacts, and consciousness abhors an artefact because only something natural, born and not manufactured, could exhibit genuine consciousness; and robots will always be much too simple to be conscious. These assumptions are considered unreasonable and inadequate by the author; thus, counter-arguments against each assumption are given. The author contends that it is more interesting to explore whether a robot that is theoretically interesting, independent of the philosophical conundrum about whether it is conscious, can be built. The Cog project on a humanoid robot is, thus, comprehensively presented and examined in this chapter.

  • Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is ‘possible in principle’. A team at MIT, of which I am a part, is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog’s ‘neural’ organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn’t matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.

  • We consider only the relationship of consciousness to physical reality, whether physical reality is interpreted as the brain, artificial intelligence, or the universe as a whole. The difficulties with starting the analysis with physical reality on the one hand and with consciousness on the other are delineated. We consider how one may derive from the other. Concepts of universal or pure consciousness versus local or ego consciousness are explored, with the possibility that consciousness may be physically creative. We examine whether artificial intelligence can possess consciousness as an extension of the interrelationship between consciousness and the brain or material reality.

  • We should not be creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates.

  • We might create artificial systems which can suffer. Since AI suffering might potentially be astronomical, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict with either the desideratum of consistency with our epistemic situation or with feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in the light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach. This approach is partly based on the maximization of expected value and partly on a deliberative approach to decision-making.

  • Artificial intelligence (AI) can be characterized as a multidisciplinary field of computer science that draws on large datasets to make machines capable of performing tasks that ordinarily require human intelligence. These tasks include the capacity to learn, adapt, reason, and comprehend abstract ideas, as well as responsiveness to complex human attributes such as attention, emotion, and creativity. The promise of AI in health care has been illustrated by its potential benefits in personalized medicine, drug discovery, and the analysis of huge datasets, alongside possible applications to improve diagnoses and clinical decisions.1 A recent topic of debate in the digital world has been artificial intelligence, particularly ChatGPT. ChatGPT is an AI model trained on huge text datasets in multiple languages with the capacity to generate human-like responses to text input, developed by OpenAI (OpenAI, L.L.C., San Francisco, CA, USA). Its name reflects its nature as a chatbot (a program able to understand and produce responses through a text-based interface) and its basis in the generative pre-trained transformer (GPT) architecture.2 ChatGPT can answer questions and compose various kinds of written content, including articles, social media posts, essays, code, and emails. Researchers and the scholarly community have had mixed reactions to this tool regarding its benefits versus its risks. On the one hand, ChatGPT, among other large language models (LLMs), can be helpful in conversational and writing tasks, improving the efficiency and accuracy of the required output.
It is not a web search tool, a reference manager, or even Wikipedia; presenting factual information is not its purpose.3 Accordingly, several teachers and content experts have already found flaws in the mathematical and scientific output it produces. Teachers have also found that it will generate citations and reference lists that look genuine but do not exist. Furthermore, the utility of AI chatbots in the medical field is a fascinating area to test, given the enormous amount of information and the diverse concepts that health care students are expected to grasp. Microsoft has also announced that ChatGPT will be incorporated into Bing to create a richer search and learning experience.4 With the world's largest technology companies competing to integrate GPT technology into their tools, new avenues of AI exploration are on the horizon for the field of education. While it can be useful in many ways, there are several risks in using ChatGPT, for example: assuming that it produces trustworthy results, privileging AI-generated text over human-written text, disclosing personal and sensitive data, disregarding its terms of use, and expanding automation. Furthermore, security concerns and the potential for cyberattacks that spread misinformation via LLMs should also be considered. In health care practice and academic writing, factual errors, ethical issues, and the fear of misuse, including the spread of misinformation, should be considered.5 ChatGPT and its successors can provide teachers and students with equal opportunities to improve their learning, a high level of writing support, and encouragement toward innovative thinking. Nevertheless, as with the implementation of any new technology, its use carries many risks and the potential for misuse.
Misinformation and bias found in ChatGPT's responses, combined with instances of cheating and plagiarism, have worried education professionals. While certain districts and organizations have acted quickly to ban ChatGPT, we instead believe, along with Kranzberg (1986), that "technology is neither good nor bad; nor is it neutral" (p. 545).6 As the world debates the educational and societal consequences of ChatGPT and artificial intelligence, what remains clear is that the development and improvement of this kind of technology show no signs of slowing down. Instructors, administrators, and policymakers must proactively seek to educate themselves and their students on how to use these tools both ethically and responsibly. Instructors should also understand the limits of using AI tools and recognize that, while each technology presents both affordances and challenges, these tools also come with their own embedded risks.

  • The article seeks to enrich the contemporary problem of identifying criteria for strong AI through discussions in the philosophy of consciousness. The well-known ideas of D. Dennett (“multiple drafts”), J. Searle (causal emergent description), and D. Chalmers (a synthetic approach to understanding consciousness) are compared with the history of the formation of the AI problem. Despite wide discussion of the problems of consciousness and of artificial forms of intelligence (strong and weak), philosophers' theories and arguments about the psychophysiological problem remain relevant. It is assumed that clarifying the mechanism of the analytical work of consciousness, the creative potential of the individual, and the ability to subsume a variety of phenomena under categorical forms, building axiomatic and synthetic judgments, will expand the tools of machine learning. To complement existing ideas about consciousness in the context of the prevalence of information approaches (D.I. Dubrovsky) and the analytical tradition (V.V. Vasiliev), the key aspects of the psychophysiological problem identified in the history of German and Russian philosophy are presented. Given the complexity and breadth of the problems identified (the definition of consciousness, the psychophysiological problem, the definition of AI, the demarcation of weak and strong forms of AI, the importance of language for building structures of thinking, and analog thinking and its capabilities), the article is limited to analyzing emerging trends in philosophy and identifying prospects for further engagement with the problem.

Last update from database: 3/23/25, 8:36 AM (UTC)